Jan 27 21:47:26 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 21:47:26 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:26 crc restorecon[4683]: 
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:26 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:27 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 21:47:27 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
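The long run of "not reset as customized by admin" messages above is expected behavior, not an error: restorecon treats container_file_t as a customizable SELinux type and leaves such files alone by default, so only genuinely mislabeled files, like /var/usrlocal/bin/kubenswrapper just above (bin_t relabeled to kubelet_exec_t), are actually changed. The category pairs on the pod paths (s0:c7,c13, s0:c682,c947, and so on) are the per-pod MCS categories the container runtime assigns so one pod's processes cannot touch another pod's files. A minimal sketch of how to verify this on a RHEL-family host; the paths assume the standard targeted policy layout, and the forced relabel at the end is illustrative only and normally unnecessary:

    # List the SELinux types restorecon considers admin-customized and skips:
    cat /etc/selinux/targeted/contexts/customizable_types
    # Show the per-pod MCS category pairs on the kubelet pod directories:
    ls -Zd /var/lib/kubelet/pods/*
    # Force even customizable types back to the policy default
    # (-R recurse, -F reset customizations too, -v report each change):
    restorecon -RFv /var/lib/kubelet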
Jan 27 21:47:28 crc kubenswrapper[4803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 27 21:47:28 crc kubenswrapper[4803]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 27 21:47:28 crc kubenswrapper[4803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 27 21:47:28 crc kubenswrapper[4803]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 27 21:47:28 crc kubenswrapper[4803]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 27 21:47:28 crc kubenswrapper[4803]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.010003 4803 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
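The six "Flag ... has been deprecated" lines are the kubelet pointing at settings that belong in the file named by --config rather than on the command line. A minimal sketch of the equivalent KubeletConfiguration stanza, using kubelet.config.k8s.io/v1beta1 field names; the endpoint, taint, reservation, and eviction values below are illustrative assumptions, not values read from this node:

    # Hypothetical migration of the deprecated flags into the --config file:
    cat > /etc/kubernetes/kubelet-config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock      # was --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # was --volume-plugin-dir
    registerWithTaints:                                           # was --register-with-taints
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    systemReserved:                                               # was --system-reserved
      cpu: 500m
      memory: 1Gi
    evictionHard:                                                 # replaces --minimum-container-ttl-duration
      memory.available: 100Mi
    EOF

Note that --pod-infra-container-image has no config-file counterpart: per its own warning and the server.go line above, the sandbox image is taken from the CRI runtime instead.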
Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.016931 4803 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.016940 4803 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.016948 4803 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.016956 4803 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.016964 4803 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.016973 4803 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.016981 4803 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.016992 4803 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017002 4803 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017014 4803 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017026 4803 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017037 4803 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017048 4803 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017058 4803 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017068 4803 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017078 4803 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017088 4803 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017098 4803 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017109 4803 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017116 4803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017124 4803 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017132 4803 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017141 4803 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017149 4803 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017159 4803 feature_gate.go:330] unrecognized feature gate: Example Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017170 4803 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017180 4803 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017189 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017198 4803 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017208 4803 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017217 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017225 4803 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017233 4803 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017241 4803 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017248 4803 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017256 4803 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017263 4803 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017271 4803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017279 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017286 4803 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017293 4803 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017301 4803 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017309 4803 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017316 4803 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017324 4803 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017334 4803 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
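
Annotation: the long runs of "unrecognized feature gate" warnings are expected on an OpenShift node. The rendered kubelet config carries the cluster's full gate list, including OpenShift-only gates (GatewayAPI, InsightsConfigAPI, and so on) that the upstream kubelet does not register, so each unknown name is warned about and skipped, and only the recognized Kubernetes gates land in the effective map printed at feature_gate.go:386. A simplified sketch of that filtering (not the real k8s.io/component-base/featuregate package):

    // Simplified sketch of feature-gate filtering: known gates are applied,
    // unknown ones produce the W "unrecognized feature gate" lines above.
    package main

    import "fmt"

    // known mirrors a few gates the kubelet actually recognizes in this log;
    // anything else falls through to a warning.
    var known = map[string]bool{
        "CloudDualStackNodeIPs":                  false,
        "DisableKubeletCloudCredentialProviders": false,
        "KMSv1":                                  false,
        "ValidatingAdmissionPolicy":              false,
    }

    func main() {
        requested := map[string]bool{
            "KMSv1":                 true, // deprecated gate: logged with a removal warning
            "GatewayAPI":            true, // OpenShift-only gate: "unrecognized feature gate"
            "CloudDualStackNodeIPs": true, // GA gate: logged as "Setting GA feature gate"
        }
        effective := map[string]bool{}
        for name, val := range requested {
            if _, ok := known[name]; !ok {
                fmt.Printf("W unrecognized feature gate: %s\n", name)
                continue
            }
            effective[name] = val
        }
        fmt.Printf("I feature gates: %v\n", effective)
    }

The same gate list is parsed several times during startup (once per consumer of the config), which is why the identical warning block repeats below with fresh timestamps.
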
Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017342 4803 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017351 4803 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017359 4803 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017367 4803 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017375 4803 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017383 4803 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017394 4803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017402 4803 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017410 4803 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017418 4803 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017427 4803 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.017436 4803 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017647 4803 flags.go:64] FLAG: --address="0.0.0.0" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017666 4803 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017681 4803 flags.go:64] FLAG: --anonymous-auth="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017701 4803 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017714 4803 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017724 4803 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017739 4803 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017753 4803 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017765 4803 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017777 4803 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017789 4803 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017801 4803 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017813 4803 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017824 4803 flags.go:64] FLAG: --cgroup-root="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017833 4803 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017842 4803 
flags.go:64] FLAG: --client-ca-file="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017894 4803 flags.go:64] FLAG: --cloud-config="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017906 4803 flags.go:64] FLAG: --cloud-provider="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017917 4803 flags.go:64] FLAG: --cluster-dns="[]" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017931 4803 flags.go:64] FLAG: --cluster-domain="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017942 4803 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017952 4803 flags.go:64] FLAG: --config-dir="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017961 4803 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017971 4803 flags.go:64] FLAG: --container-log-max-files="5" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017986 4803 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.017998 4803 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018009 4803 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018022 4803 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018053 4803 flags.go:64] FLAG: --contention-profiling="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018065 4803 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018075 4803 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018085 4803 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018096 4803 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018111 4803 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018122 4803 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018139 4803 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018156 4803 flags.go:64] FLAG: --enable-load-reader="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018168 4803 flags.go:64] FLAG: --enable-server="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018179 4803 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018195 4803 flags.go:64] FLAG: --event-burst="100" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018207 4803 flags.go:64] FLAG: --event-qps="50" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018218 4803 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018230 4803 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018241 4803 flags.go:64] FLAG: --eviction-hard="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018255 4803 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018266 4803 flags.go:64] FLAG: 
--eviction-minimum-reclaim="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018278 4803 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018290 4803 flags.go:64] FLAG: --eviction-soft="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018302 4803 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018313 4803 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018324 4803 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018336 4803 flags.go:64] FLAG: --experimental-mounter-path="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018349 4803 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018360 4803 flags.go:64] FLAG: --fail-swap-on="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018371 4803 flags.go:64] FLAG: --feature-gates="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018385 4803 flags.go:64] FLAG: --file-check-frequency="20s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018397 4803 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018410 4803 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018422 4803 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018434 4803 flags.go:64] FLAG: --healthz-port="10248" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018446 4803 flags.go:64] FLAG: --help="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018457 4803 flags.go:64] FLAG: --hostname-override="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018468 4803 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018480 4803 flags.go:64] FLAG: --http-check-frequency="20s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018491 4803 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018502 4803 flags.go:64] FLAG: --image-credential-provider-config="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018513 4803 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018531 4803 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018542 4803 flags.go:64] FLAG: --image-service-endpoint="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018553 4803 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018564 4803 flags.go:64] FLAG: --kube-api-burst="100" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018576 4803 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018592 4803 flags.go:64] FLAG: --kube-api-qps="50" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018604 4803 flags.go:64] FLAG: --kube-reserved="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018616 4803 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018627 4803 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018638 4803 flags.go:64] FLAG: --kubelet-cgroups="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018649 4803 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018660 4803 flags.go:64] FLAG: --lock-file="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018672 4803 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018683 4803 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018695 4803 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018713 4803 flags.go:64] FLAG: --log-json-split-stream="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018725 4803 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018736 4803 flags.go:64] FLAG: --log-text-split-stream="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018747 4803 flags.go:64] FLAG: --logging-format="text" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018758 4803 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018771 4803 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018784 4803 flags.go:64] FLAG: --manifest-url="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018795 4803 flags.go:64] FLAG: --manifest-url-header="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018820 4803 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018831 4803 flags.go:64] FLAG: --max-open-files="1000000" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018881 4803 flags.go:64] FLAG: --max-pods="110" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018895 4803 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018907 4803 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018918 4803 flags.go:64] FLAG: --memory-manager-policy="None" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018929 4803 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018941 4803 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018954 4803 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018967 4803 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.018993 4803 flags.go:64] FLAG: --node-status-max-images="50" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019004 4803 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019016 4803 flags.go:64] FLAG: --oom-score-adj="-999" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019028 4803 flags.go:64] FLAG: --pod-cidr="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019040 4803 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019055 4803 flags.go:64] FLAG: --pod-manifest-path="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019067 4803 flags.go:64] FLAG: --pod-max-pids="-1" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019078 4803 flags.go:64] FLAG: --pods-per-core="0" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019095 4803 flags.go:64] FLAG: --port="10250" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019107 4803 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019118 4803 flags.go:64] FLAG: --provider-id="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019129 4803 flags.go:64] FLAG: --qos-reserved="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019141 4803 flags.go:64] FLAG: --read-only-port="10255" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019153 4803 flags.go:64] FLAG: --register-node="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019165 4803 flags.go:64] FLAG: --register-schedulable="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019176 4803 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019196 4803 flags.go:64] FLAG: --registry-burst="10" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019207 4803 flags.go:64] FLAG: --registry-qps="5" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019218 4803 flags.go:64] FLAG: --reserved-cpus="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019230 4803 flags.go:64] FLAG: --reserved-memory="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019244 4803 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019255 4803 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019267 4803 flags.go:64] FLAG: --rotate-certificates="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019277 4803 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019287 4803 flags.go:64] FLAG: --runonce="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019295 4803 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019305 4803 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019314 4803 flags.go:64] FLAG: --seccomp-default="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019323 4803 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019333 4803 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019342 4803 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019351 4803 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019361 4803 flags.go:64] FLAG: --storage-driver-password="root" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019369 4803 flags.go:64] FLAG: --storage-driver-secure="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 
21:47:28.019379 4803 flags.go:64] FLAG: --storage-driver-table="stats" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019388 4803 flags.go:64] FLAG: --storage-driver-user="root" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019396 4803 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019406 4803 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019415 4803 flags.go:64] FLAG: --system-cgroups="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019423 4803 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019438 4803 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019446 4803 flags.go:64] FLAG: --tls-cert-file="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019455 4803 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019465 4803 flags.go:64] FLAG: --tls-min-version="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019475 4803 flags.go:64] FLAG: --tls-private-key-file="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019485 4803 flags.go:64] FLAG: --topology-manager-policy="none" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019494 4803 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019503 4803 flags.go:64] FLAG: --topology-manager-scope="container" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019512 4803 flags.go:64] FLAG: --v="2" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019523 4803 flags.go:64] FLAG: --version="false" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019535 4803 flags.go:64] FLAG: --vmodule="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019545 4803 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.019555 4803 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019757 4803 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019768 4803 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019777 4803 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019785 4803 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019793 4803 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019801 4803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019808 4803 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019816 4803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019827 4803 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
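
Annotation: the FLAG: lines above are the standard Kubernetes startup dump. After parsing, the component walks its entire flag set and logs every flag with its effective value, defaults included, which is why even unset flags such as --cloud-provider="" appear. A sketch of that walk (component-base does this with klog at verbosity 1; plain fmt here):

    // Sketch of the "FLAG: --name=\"value\"" dump at flags.go:64:
    // visit every registered flag and print its effective value.
    package main

    import (
        "fmt"

        "github.com/spf13/pflag"
    )

    func main() {
        fs := pflag.NewFlagSet("kubelet-sketch", pflag.ContinueOnError)
        fs.String("node-ip", "", "IP address of the node")
        fs.Int32("max-pods", 110, "maximum number of pods")
        _ = fs.Parse([]string{"--node-ip=192.168.126.11"})

        // One line per flag, set or not, matching the log's format.
        fs.VisitAll(func(f *pflag.Flag) {
            fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value)
        })
    }
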
Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019838 4803 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019879 4803 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019888 4803 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019896 4803 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019905 4803 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019914 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019922 4803 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019930 4803 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019938 4803 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019947 4803 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019955 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019971 4803 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019979 4803 feature_gate.go:330] unrecognized feature gate: Example Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019987 4803 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.019995 4803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020002 4803 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020011 4803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020019 4803 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020028 4803 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020037 4803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020045 4803 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020053 4803 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020069 4803 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020076 4803 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020084 4803 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 
21:47:28.020092 4803 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020100 4803 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020108 4803 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020116 4803 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020124 4803 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020132 4803 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020139 4803 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020147 4803 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020154 4803 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020164 4803 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020175 4803 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020183 4803 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020192 4803 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020200 4803 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020208 4803 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020219 4803 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020227 4803 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020242 4803 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020250 4803 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020258 4803 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020266 4803 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020273 4803 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020281 4803 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020288 4803 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020296 4803 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020304 4803 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020312 4803 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020324 4803 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020331 4803 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020340 4803 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020349 4803 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020357 4803 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020365 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020372 4803 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020380 4803 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020389 4803 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.020399 4803 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.020414 4803 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.032348 4803 server.go:491] "Kubelet version" 
kubeletVersion="v1.31.5" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.032829 4803 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032914 4803 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032925 4803 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032930 4803 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032935 4803 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032939 4803 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032943 4803 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032947 4803 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032951 4803 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032955 4803 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032959 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032962 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032966 4803 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032969 4803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032975 4803 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032978 4803 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032982 4803 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032986 4803 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032990 4803 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032994 4803 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.032998 4803 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033002 4803 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033006 4803 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033010 4803 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033014 4803 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 21:47:28 crc kubenswrapper[4803]: 
W0127 21:47:28.033020 4803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033024 4803 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033029 4803 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033034 4803 feature_gate.go:330] unrecognized feature gate: Example Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033038 4803 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033043 4803 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033047 4803 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033052 4803 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033056 4803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033061 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033065 4803 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033068 4803 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033072 4803 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033075 4803 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033079 4803 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033083 4803 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033086 4803 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033091 4803 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033095 4803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033099 4803 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033102 4803 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033106 4803 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033111 4803 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033115 4803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033119 4803 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033123 4803 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033127 4803 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033130 4803 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033134 4803 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033138 4803 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033142 4803 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033146 4803 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033149 4803 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033153 4803 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033156 4803 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033160 4803 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033163 4803 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033168 4803 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033172 4803 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033178 4803 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033182 4803 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033186 4803 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033189 4803 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033193 4803 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033196 4803 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033200 4803 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033204 4803 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.033211 4803 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033444 4803 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033454 4803 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033459 4803 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033463 4803 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033467 4803 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033471 4803 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033475 4803 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033478 4803 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033483 4803 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033546 4803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033551 4803 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033555 4803 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033559 4803 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033563 4803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033568 4803 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033573 4803 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033577 4803 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033581 4803 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033613 4803 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033618 4803 feature_gate.go:330] unrecognized feature gate: Example Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033622 4803 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033629 4803 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033632 4803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033637 4803 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033642 4803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033646 4803 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033650 4803 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033654 4803 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033658 4803 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033663 4803 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033667 4803 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033671 4803 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033675 4803 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033678 4803 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033905 4803 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033916 4803 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033921 4803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033925 4803 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033929 4803 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033932 4803 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033938 4803 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033943 4803 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033948 4803 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033952 4803 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033956 4803 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033960 4803 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033966 4803 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033971 4803 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033974 4803 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033978 4803 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033982 4803 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033986 4803 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033990 4803 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.033994 4803 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 21:47:28 crc 
kubenswrapper[4803]: W0127 21:47:28.034867 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034882 4803 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034887 4803 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034894 4803 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034905 4803 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034909 4803 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034913 4803 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034917 4803 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034920 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034924 4803 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034928 4803 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034931 4803 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034935 4803 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034938 4803 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034942 4803 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034946 4803 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.034949 4803 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.034961 4803 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.037923 4803 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.044491 4803 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.044656 4803 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
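
Annotation: the certificate_manager lines that follow report the client certificate's expiration together with a rotation deadline that falls months earlier. That gap is rotation jitter: client-go schedules rotation at a random point between roughly 70% and 90% of the certificate's validity window, so a fleet of kubelets does not hit the CA at once. A sketch of that computation under that assumed policy (the notBefore date here is a guess, since the log only shows the expiry):

    // Sketch of a jittered certificate rotation deadline: rotate at a
    // random point in the 70-90% span of the cert's validity window.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // Hypothetical one-year validity ending at the expiry shown in the log.
        notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
        notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
        fmt.Println("expiration:       ", notAfter)
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }

The E0127 "connection refused" CSR error a few lines below is this rotation attempt firing before the API server at api-int.crc.testing:6443 is reachable during boot; the certificate manager simply retries later.
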
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.046679 4803 server.go:997] "Starting client certificate rotation"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.046730 4803 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.046954 4803 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 19:37:47.061112866 +0000 UTC
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.047074 4803 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.074409 4803 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.077241 4803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.079131 4803 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.103633 4803 log.go:25] "Validated CRI v1 runtime API"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.147233 4803 log.go:25] "Validated CRI v1 image API"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.150884 4803 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.158000 4803 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-21-42-45-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.158065 4803 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.187101 4803 manager.go:217] Machine: {Timestamp:2026-01-27 21:47:28.183839495 +0000 UTC m=+0.599861264 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:676ec8ff-b158-409e-ada7-33047b2b95b9 BootID:a9610eea-40df-4e3a-82a8-03c1d35078a8 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:73:5e:f3 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:73:5e:f3 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:da:72:c5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:59:78:ed Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6d:c4:59 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:bc:e3:e0 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:a0:78:bf:2a:1c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a2:75:c3:72:eb:2a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.187562 4803 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.187809 4803 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.191764 4803 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.192888 4803 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.192960 4803 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.193342 4803 topology_manager.go:138] "Creating topology manager with none policy"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.193362 4803 container_manager_linux.go:303] "Creating device plugin manager"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.194051 4803 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.194105 4803 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.194511 4803 state_mem.go:36] "Initialized new in-memory state store"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.194660 4803 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.202230 4803 kubelet.go:418] "Attempting to sync node with API server"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.202279 4803 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.202325 4803 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.202346 4803 kubelet.go:324] "Adding apiserver pod source"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.202366 4803 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.208948 4803 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.214283 4803 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.215071 4803 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.215197 4803 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.215106 4803 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.215366 4803 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.217125 4803 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.218927 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.218961 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.218973 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.218984 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.219000 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.219012 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.219022 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.219037 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.219050 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.219060 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.219075 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.219086 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.222312 4803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.223266 4803 server.go:1280] "Started kubelet"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.224592 4803 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.224964 4803 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.224593 4803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.225829 4803 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 27 21:47:28 crc systemd[1]: Started Kubernetes Kubelet.
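Worth noting in the certificate_manager.go:356 lines above: the rotation deadline (2025-11-22) is deliberately not the expiry (2026-02-24) but a jittered point late in the certificate's validity window, so a fleet of kubelets does not stampede the CSR API at the same instant; and since the first signing request failed with connection refused (the API server is not up yet at this point in boot), rotation is simply retried later. A stdlib-only Go sketch of deriving such a deadline; the 70-90% window is an assumption taken from my reading of the upstream certificate manager, not something stated in this log:

```go
// Sketch: pick a rotation deadline at a jittered point in the tail of a
// certificate's validity window. The [0.7, 0.9) range is an assumption
// about the upstream kubelet certificate manager, used here to show why
// the logged deadline (2025-11-22) falls months before expiry (2026-02-24).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64() // somewhere in [0.7, 0.9) of the lifetime (assumed)
	return notBefore.Add(time.Duration(fraction * float64(lifetime)))
}

func main() {
	// Assume a one-year client certificate matching the expiry in the log.
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	fmt.Println(rotationDeadline(notBefore, notAfter)) // typically lands in late 2025, as in the log
}
```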
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.230437 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.230511 4803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.231049 4803 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.230744 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:25:29.837347758 +0000 UTC Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.231347 4803 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.231410 4803 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.231697 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.231784 4803 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.231945 4803 server.go:460] "Adding debug handlers to kubelet server" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.233078 4803 factory.go:55] Registering systemd factory Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.233120 4803 factory.go:221] Registration of the systemd container factory successfully Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.233521 4803 factory.go:153] Registering CRI-O factory Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.233551 4803 factory.go:221] Registration of the crio container factory successfully Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.233656 4803 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.233689 4803 factory.go:103] Registering Raw factory Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.233720 4803 manager.go:1196] Started watching for new ooms in manager Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.235245 4803 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.235673 4803 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.238658 4803 manager.go:319] Starting recovery of all containers Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.236924 4803 event.go:368] "Unable to write 
event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eb4c1d67a3522 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 21:47:28.223204642 +0000 UTC m=+0.639226381,LastTimestamp:2026-01-27 21:47:28.223204642 +0000 UTC m=+0.639226381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260161 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260246 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260262 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260275 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260288 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260303 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260316 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260330 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260345 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260387 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260400 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260414 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260474 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260496 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260511 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260527 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260543 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260557 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260605 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260623 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260636 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260651 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260663 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260680 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260693 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260708 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260724 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260738 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260752 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260764 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260777 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260789 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260823 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260837 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260880 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260893 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260905 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260919 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260931 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260944 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260958 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260972 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260985 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.260997 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261045 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261058 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261071 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261158 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261181 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261194 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261207 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261232 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261253 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261268 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261282 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261296 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261311 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261328 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261342 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261354 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261367 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261380 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261397 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261410 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261424 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261463 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261475 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261488 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261501 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261515 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261527 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261540 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261563 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261579 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261592 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261606 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261620 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261635 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261648 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261663 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261679 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261693 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261706 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261719 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261733 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261753 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261766 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261780 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261793 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261807 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261821 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261835 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261866 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261884 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261918 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261943 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261963 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.261982 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262001 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262016 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262032 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262046 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262069 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262087 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262118 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262138 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262152 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262167 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262186 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262205 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262219 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262233 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262250 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262265 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262285 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262302 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262316 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262330 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262344 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262359 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262375 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262389 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262404 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262419 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262435 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262451 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262470 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262484 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262500 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262515 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262535 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262550 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262563 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262586 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262603 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262621 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262635 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262649 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262663 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262678 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262696 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262710 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262724 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262746 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262764 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262780 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262795 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262809 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262824 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262840 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262874 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262888 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262902 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262920 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262937 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262951 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262968 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.262982 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263002 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263019 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263035 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263049 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263064 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263115 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263130 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263150 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263193 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263208 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263222 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263237 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263252 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263306 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263322 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263338 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263355 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263369 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263385 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.263401 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267156 4803 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267243 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267276 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267301 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267323 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267345 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267368 4803 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267390 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267411 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267433 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267452 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267472 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267491 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267509 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267528 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267549 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267568 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267590 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267614 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267637 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267656 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267677 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267703 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267725 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267744 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267767 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267788 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267808 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267834 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267884 4803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267905 4803 reconstruct.go:97] "Volume reconstruction finished" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.267919 4803 reconciler.go:26] "Reconciler: start to sync state" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.294824 4803 manager.go:324] Recovery completed Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.303280 4803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.305358 4803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.305433 4803 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.305476 4803 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.305579 4803 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.307272 4803 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.307339 4803 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.307588 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.309487 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.309524 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.309537 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.310367 4803 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.310395 4803 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.310421 4803 state_mem.go:36] "Initialized new in-memory state store" Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.331868 4803 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 21:47:28 crc kubenswrapper[4803]: 
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.333874 4803 policy_none.go:49] "None policy: Start"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.334951 4803 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.334994 4803 state_mem.go:35] "Initializing new in-memory state store"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.390429 4803 manager.go:334] "Starting Device Plugin manager"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.390479 4803 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.390490 4803 server.go:79] "Starting device plugin registration server"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.391135 4803 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.391150 4803 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.391373 4803 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.391442 4803 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.391449 4803 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.404620 4803 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.406303 4803 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.406392 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.413324 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.413389 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.413409 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.413716 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
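The "SyncLoop ADD" with source="file" is the static-pod path: these five pods come from on-disk manifests rather than from the API server, which is exactly why the kubelet can start them while api-int.crc.testing:6443 is still refusing connections. The "No sandbox for pod can be found. Need to start a new one" messages that follow just mean no prior sandbox survived the restart, so CRI-O will be asked for fresh ones. A throwaway sketch for pulling the pod list out of such an entry (the entry string is copied from the log; the program itself is illustrative, not part of any tooling):

    // syncloopadd.go - extract the pod list from a kubelet "SyncLoop ADD" entry.
    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        entry := `I0127 21:47:28.406303 4803 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]`
        re := regexp.MustCompile(`pods=\[([^\]]*)\]`)
        m := re.FindStringSubmatch(entry)
        if m == nil {
            return
        }
        for _, p := range strings.Split(m[1], ",") {
            fmt.Println(strings.Trim(p, `"`)) // namespace/name, one per line
        }
    }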
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.413963 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.416476 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.416493 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.416509 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.416519 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.416537 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.416523 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.416746 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.417072 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.417139 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.417696 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.417730 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.417744 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.417922 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.418068 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.418111 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.418629 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.418683 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.418701 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.418872 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.418976 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419013 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419080 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419125 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419139 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419152 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419186 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419210 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419787 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419840 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419892 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419932 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419953 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.419964 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.420161 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.420206 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.421301 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.421330 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.421342 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.432774 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470639 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470687 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470715 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470740 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470762 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470781 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470803 4803 
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470803 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470824 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470863 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470883 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470906 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470927 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470947 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470965 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.470983 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.491312 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
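The reconciler_common.go:245 entries above and the reconciler_common.go:218 / operation_generator.go:637 pairs that follow are the volume mount flow for the five static pods: VerifyControllerAttachedVolume registers each host-path volume, then "MountVolume started" and "MountVolume.SetUp succeeded" are logged per volume. Because the output is interleaved, a volume that never reports success is easy to miss; a small cross-check over the raw journal (mountcheck.go is a made-up name, and feeding it stdin is an assumption):

    // mountcheck.go - every "operationExecutor.MountVolume started" should be
    // matched by a "MountVolume.SetUp succeeded" for the same UniqueName.
    // Usage (assumed): journalctl -u kubelet | go run mountcheck.go
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        // UniqueName appears escaped in the journal: (UniqueName: \"...\")
        re := regexp.MustCompile(`UniqueName: \\"([^\\"]+)\\"`)
        started := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            line := sc.Text()
            m := re.FindStringSubmatch(line)
            if m == nil {
                continue
            }
            switch {
            case strings.Contains(line, "MountVolume started"):
                started[m[1]] = true
            case strings.Contains(line, "MountVolume.SetUp succeeded"):
                delete(started, m[1])
            }
        }
        for v := range started {
            fmt.Println("no SetUp succeeded seen for:", v)
        }
    }

In this log every started mount does report success, so the check would print nothing.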
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.492882 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.492943 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.492964 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.493004 4803 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.493702 4803 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.572706 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.572804 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.572875 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.572908 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.572942 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.572943 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.572976 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573008 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573012 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573042 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573066 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573077 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573113 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573079 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573012 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573129 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573128 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573185 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573096 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573215 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573294 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573348 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573362 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573460 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573442 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573501 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573557 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573619 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573659 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.573916 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.694554 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.696425 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.696478 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.696552 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.696637 4803 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.697297 4803 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.754295 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.763773 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.800057 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 21:47:28 crc kubenswrapper[4803]: I0127 21:47:28.809590 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.820338 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-52df8eecfb4b4e46a706782aa21906191dde01921291cb9592d849de6df889f2 WatchSource:0}: Error finding container 52df8eecfb4b4e46a706782aa21906191dde01921291cb9592d849de6df889f2: Status 404 returned error can't find the container with id 52df8eecfb4b4e46a706782aa21906191dde01921291cb9592d849de6df889f2 Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.821677 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-dd44505b93c8bfdca21b0b3d576c05ab1bc3afce997051c0f6f974d101feaa0e WatchSource:0}: Error finding container dd44505b93c8bfdca21b0b3d576c05ab1bc3afce997051c0f6f974d101feaa0e: Status 404 returned error can't find the container with id dd44505b93c8bfdca21b0b3d576c05ab1bc3afce997051c0f6f974d101feaa0e Jan 27 21:47:28 crc kubenswrapper[4803]: E0127 21:47:28.833798 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.835296 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f31baa606fab23045ede5d01adae85611f4961e0fccd4d04f96df787dc188396 WatchSource:0}: Error finding container f31baa606fab23045ede5d01adae85611f4961e0fccd4d04f96df787dc188396: Status 404 returned error can't find the container with id f31baa606fab23045ede5d01adae85611f4961e0fccd4d04f96df787dc188396 Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.839938 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-41ae4a386e49df429f9654ea720621b92bc7e5a80c1e42f39d0ccf26695f674e WatchSource:0}: Error finding container 41ae4a386e49df429f9654ea720621b92bc7e5a80c1e42f39d0ccf26695f674e: Status 404 returned error can't find the container with id 41ae4a386e49df429f9654ea720621b92bc7e5a80c1e42f39d0ccf26695f674e Jan 27 21:47:28 crc kubenswrapper[4803]: W0127 21:47:28.841282 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4d1d85403aa737636ab56f0ea3f428f5be617aaab087dbf392c7b6b988a4f3f8 WatchSource:0}: Error finding container 4d1d85403aa737636ab56f0ea3f428f5be617aaab087dbf392c7b6b988a4f3f8: Status 404 returned error can't find the container with id 4d1d85403aa737636ab56f0ea3f428f5be617aaab087dbf392c7b6b988a4f3f8 Jan 27 21:47:29 crc kubenswrapper[4803]: W0127 21:47:29.096810 4803 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:29 crc kubenswrapper[4803]: E0127 21:47:29.096965 4803 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.097760 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.099906 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.099945 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.099961 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.099993 4803 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 21:47:29 crc kubenswrapper[4803]: E0127 21:47:29.100616 4803 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 27 21:47:29 crc kubenswrapper[4803]: W0127 21:47:29.136403 4803 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:29 crc kubenswrapper[4803]: E0127 21:47:29.136508 4803 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.226727 4803 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.231718 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:44:47.467658913 +0000 UTC Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.312635 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f31baa606fab23045ede5d01adae85611f4961e0fccd4d04f96df787dc188396"} Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.313782 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd44505b93c8bfdca21b0b3d576c05ab1bc3afce997051c0f6f974d101feaa0e"} Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.314777 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"52df8eecfb4b4e46a706782aa21906191dde01921291cb9592d849de6df889f2"} Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.316022 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4d1d85403aa737636ab56f0ea3f428f5be617aaab087dbf392c7b6b988a4f3f8"} Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.317175 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"41ae4a386e49df429f9654ea720621b92bc7e5a80c1e42f39d0ccf26695f674e"} Jan 27 21:47:29 crc kubenswrapper[4803]: W0127 21:47:29.461237 4803 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:29 crc kubenswrapper[4803]: E0127 21:47:29.461321 4803 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 27 21:47:29 crc kubenswrapper[4803]: W0127 21:47:29.466183 4803 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:29 crc kubenswrapper[4803]: E0127 21:47:29.466234 4803 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 27 21:47:29 crc kubenswrapper[4803]: E0127 21:47:29.634487 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.901546 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.905072 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.905137 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.905155 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:29 crc kubenswrapper[4803]: I0127 21:47:29.905198 4803 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 21:47:29 crc kubenswrapper[4803]: E0127 21:47:29.905914 4803 kubelet_node_status.go:99] "Unable to register node with API 
server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.218671 4803 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 21:47:30 crc kubenswrapper[4803]: E0127 21:47:30.220448 4803 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.226604 4803 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.232641 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:16:21.134995199 +0000 UTC Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.325159 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b"} Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.325213 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411"} Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.325228 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f"} Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.325240 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e"} Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.325332 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.326318 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.326348 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.326360 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.329614 4803 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a" exitCode=0 Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.329703 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a"} Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.329920 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.331506 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.331587 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.331609 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.334414 4803 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8693bdd98a1d67a6b1a61ba14f61a2a252f03d3834a72479c400c8ff3d635f07" exitCode=0 Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.334478 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.334865 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8693bdd98a1d67a6b1a61ba14f61a2a252f03d3834a72479c400c8ff3d635f07"} Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.334900 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.336122 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.336161 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.336174 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.336898 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.336933 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.336946 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.337938 4803 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e975b1dac74feab22752f002c2212778ab0e6c1e88827c8543b4b61c644856cf" exitCode=0 Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.338010 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e975b1dac74feab22752f002c2212778ab0e6c1e88827c8543b4b61c644856cf"} Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.338047 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.339560 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.339627 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.339652 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.341058 4803 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa" exitCode=0 Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.341138 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa"} Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.341197 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.342316 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.342362 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.342379 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:30 crc kubenswrapper[4803]: I0127 21:47:30.351686 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.225639 4803 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.233572 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:26:04.400309047 +0000 UTC Jan 27 21:47:31 crc kubenswrapper[4803]: E0127 21:47:31.234942 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Jan 27 21:47:31 crc kubenswrapper[4803]: W0127 21:47:31.319185 4803 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:31 crc kubenswrapper[4803]: E0127 21:47:31.319330 4803 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.347428 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2ffe7f19851c6226af442882ecaa7514cc38d6bd1467881cbb700190fb58cd04"} Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.347496 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8db7e62956ef3526e02fdb5bc208185103cfbe40b86346dc993fb956bdb15cf8"} Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.347516 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37"} Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.347660 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.349559 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.349605 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.349624 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.355240 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba"} Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.355275 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba"} Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.355304 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078"} Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.355319 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a"} Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.357666 4803 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fc926c2b8c1b7aa9b1e12579cdd7f925d8e430f26e20f193f5cb98d92d5f544b" exitCode=0 Jan 27 21:47:31 crc kubenswrapper[4803]: 
I0127 21:47:31.357760 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.357970 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fc926c2b8c1b7aa9b1e12579cdd7f925d8e430f26e20f193f5cb98d92d5f544b"} Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.359315 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.359357 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.359369 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.364788 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"88ea961809ff0b077fbd2ff2061c4753b5e4655d3485608255d07d5a4566ab20"} Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.364813 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.364814 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.365791 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.365836 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.365874 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.366579 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.366609 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.366625 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.506313 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.507330 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.507358 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.507367 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:31 crc kubenswrapper[4803]: I0127 21:47:31.507387 4803 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 21:47:31 crc kubenswrapper[4803]: E0127 21:47:31.507820 4803 kubelet_node_status.go:99] "Unable to 
register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 27 21:47:31 crc kubenswrapper[4803]: W0127 21:47:31.539048 4803 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:31 crc kubenswrapper[4803]: E0127 21:47:31.539111 4803 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 27 21:47:31 crc kubenswrapper[4803]: W0127 21:47:31.908419 4803 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 27 21:47:31 crc kubenswrapper[4803]: E0127 21:47:31.908552 4803 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.234550 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:03:51.050039643 +0000 UTC Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.268949 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.372541 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2"} Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.372652 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.374068 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.374128 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.374152 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.375313 4803 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7f67f10b3d14ef814db94edc15335233c30f40607e061befa8c3fd290ac441dc" exitCode=0 Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.375435 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.375479 4803 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.375490 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.376366 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7f67f10b3d14ef814db94edc15335233c30f40607e061befa8c3fd290ac441dc"} Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.376493 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.376607 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.377924 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.377970 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.377996 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.378094 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.378122 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.378140 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.377927 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.378192 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.378207 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.377996 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.378267 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.378282 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:32 crc kubenswrapper[4803]: I0127 21:47:32.535973 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.234779 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:16:29.730354972 +0000 UTC Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.382597 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.382659 4803 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.382955 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"97cb16c37944ef5f3da40e23b7a3a77ce3150bf3093f9ea02a5a695d96558af2"} Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.383004 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"23a66174488c19476823f460f78ec292dcf80eec37ef0ca11ec226ea8b54c167"} Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.383015 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"18fb04a09039ba54129ad2a00251e8c573a9503653d84595b9adfd70cf2c8e33"} Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.383026 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"146b92bce74e7e87ab315c1912a33e14f44621bf25311be46ec30fea86332818"} Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.383154 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.383929 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.383967 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.383979 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.384305 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.384346 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:33 crc kubenswrapper[4803]: I0127 21:47:33.384361 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.235963 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 04:02:40.624796957 +0000 UTC Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.395188 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"449c9660ee222480f940fc0d0d9648c38109a5c30ce698fe969b293dd9fc528a"} Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.395251 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.395306 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.395340 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.397013 
4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.397062 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.397079 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.397015 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.397125 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.397176 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.497611 4803 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.708551 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.710348 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.710413 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.710438 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:34 crc kubenswrapper[4803]: I0127 21:47:34.710477 4803 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.099029 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.237133 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:06:44.347575826 +0000 UTC Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.398252 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.399701 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.399764 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.399787 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.425708 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.425929 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.425986 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 
21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.427471 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.427517 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.427536 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:35 crc kubenswrapper[4803]: I0127 21:47:35.833769 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.199277 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.199514 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.201353 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.201399 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.201417 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.238338 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:52:48.590525206 +0000 UTC Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.241632 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.401887 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.401889 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.403690 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.403743 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.403761 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.404279 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.404336 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:36 crc kubenswrapper[4803]: I0127 21:47:36.404354 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:37 crc kubenswrapper[4803]: I0127 21:47:37.239278 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-20 01:32:52.594944507 +0000 UTC Jan 27 21:47:37 crc kubenswrapper[4803]: I0127 21:47:37.405122 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:37 crc kubenswrapper[4803]: I0127 21:47:37.406487 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:37 crc kubenswrapper[4803]: I0127 21:47:37.406536 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:37 crc kubenswrapper[4803]: I0127 21:47:37.406553 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:37 crc kubenswrapper[4803]: I0127 21:47:37.560358 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:47:37 crc kubenswrapper[4803]: I0127 21:47:37.560572 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:37 crc kubenswrapper[4803]: I0127 21:47:37.562662 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:37 crc kubenswrapper[4803]: I0127 21:47:37.562748 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:37 crc kubenswrapper[4803]: I0127 21:47:37.562766 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:37 crc kubenswrapper[4803]: I0127 21:47:37.568178 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:47:38 crc kubenswrapper[4803]: I0127 21:47:38.177543 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 21:47:38 crc kubenswrapper[4803]: I0127 21:47:38.178155 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:38 crc kubenswrapper[4803]: I0127 21:47:38.179945 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:38 crc kubenswrapper[4803]: I0127 21:47:38.180023 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:38 crc kubenswrapper[4803]: I0127 21:47:38.180051 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:38 crc kubenswrapper[4803]: I0127 21:47:38.240318 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:10:02.408999897 +0000 UTC Jan 27 21:47:38 crc kubenswrapper[4803]: E0127 21:47:38.405515 4803 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 21:47:38 crc kubenswrapper[4803]: I0127 21:47:38.406873 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:38 crc kubenswrapper[4803]: I0127 21:47:38.407740 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:38 crc kubenswrapper[4803]: I0127 21:47:38.407773 4803 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:38 crc kubenswrapper[4803]: I0127 21:47:38.407787 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:39 crc kubenswrapper[4803]: I0127 21:47:39.199965 4803 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 21:47:39 crc kubenswrapper[4803]: I0127 21:47:39.200062 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 21:47:39 crc kubenswrapper[4803]: I0127 21:47:39.241785 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:56:20.131322488 +0000 UTC Jan 27 21:47:40 crc kubenswrapper[4803]: I0127 21:47:40.242905 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:26:52.531278677 +0000 UTC Jan 27 21:47:40 crc kubenswrapper[4803]: I0127 21:47:40.357368 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:47:40 crc kubenswrapper[4803]: I0127 21:47:40.357529 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:40 crc kubenswrapper[4803]: I0127 21:47:40.359456 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:40 crc kubenswrapper[4803]: I0127 21:47:40.359523 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:40 crc kubenswrapper[4803]: I0127 21:47:40.359546 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:41 crc kubenswrapper[4803]: I0127 21:47:41.243186 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:55:52.80159053 +0000 UTC Jan 27 21:47:42 crc kubenswrapper[4803]: I0127 21:47:42.032314 4803 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 21:47:42 crc kubenswrapper[4803]: I0127 21:47:42.032398 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 21:47:42 crc kubenswrapper[4803]: I0127 21:47:42.037076 4803 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 21:47:42 crc kubenswrapper[4803]: I0127 21:47:42.037145 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 21:47:42 crc kubenswrapper[4803]: I0127 21:47:42.243656 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:52:53.458838278 +0000 UTC Jan 27 21:47:43 crc kubenswrapper[4803]: I0127 21:47:43.244149 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:41:38.293616256 +0000 UTC Jan 27 21:47:44 crc kubenswrapper[4803]: I0127 21:47:44.245051 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:53:48.125343455 +0000 UTC Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.130973 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.131134 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.132441 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.132511 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.132531 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.151341 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.245618 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 18:37:34.609364349 +0000 UTC Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.424752 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.425617 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.425682 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.425700 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.432082 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.432334 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.433686 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.433725 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.433733 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:45 crc kubenswrapper[4803]: I0127 21:47:45.439817 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:47:46 crc kubenswrapper[4803]: I0127 21:47:46.245986 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:42:46.162125142 +0000 UTC Jan 27 21:47:46 crc kubenswrapper[4803]: I0127 21:47:46.428336 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:46 crc kubenswrapper[4803]: I0127 21:47:46.429620 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:46 crc kubenswrapper[4803]: I0127 21:47:46.429660 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:46 crc kubenswrapper[4803]: I0127 21:47:46.429674 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.038546 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.038603 4803 trace.go:236] Trace[362137353]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 21:47:34.956) (total time: 12082ms): Jan 27 21:47:47 crc kubenswrapper[4803]: Trace[362137353]: ---"Objects listed" error: 12082ms (21:47:47.038) Jan 27 21:47:47 crc kubenswrapper[4803]: Trace[362137353]: [12.082086363s] [12.082086363s] END Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.038632 4803 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.039004 4803 trace.go:236] Trace[726111356]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 21:47:36.990) (total time: 10047ms): Jan 27 21:47:47 crc kubenswrapper[4803]: Trace[726111356]: ---"Objects listed" error: 10047ms (21:47:47.038) Jan 27 21:47:47 crc kubenswrapper[4803]: Trace[726111356]: [10.04791822s] [10.04791822s] END Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.039046 4803 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.042688 4803 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not 
synchronized" node="crc" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.042761 4803 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.043044 4803 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.047321 4803 trace.go:236] Trace[702375816]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 21:47:32.546) (total time: 14501ms): Jan 27 21:47:47 crc kubenswrapper[4803]: Trace[702375816]: ---"Objects listed" error: 14500ms (21:47:47.047) Jan 27 21:47:47 crc kubenswrapper[4803]: Trace[702375816]: [14.501117287s] [14.501117287s] END Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.047355 4803 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.049559 4803 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.084158 4803 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59006->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.084300 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59006->192.168.126.11:17697: read: connection reset by peer" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.084637 4803 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36276->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.084721 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36276->192.168.126.11:17697: read: connection reset by peer" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.085250 4803 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.085311 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
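
The Trace[...] blocks above are the k8s.io/utils/trace pattern: the reflector wraps its initial List in a named trace, records an "Objects listed" step, and the trace prints itself only when the total time crosses a threshold, which is why only the slow lists (12.1s, 10.0s, 14.5s) appear while fast ones stay silent. The preceding lease error is the same slowness seen from the other side, and its retry interval of 6.4s is consistent with a doubling backoff starting at 200ms. A sketch of the trace usage; the 10s threshold and the stub list function are assumptions for illustration, and k8s.io/utils/trace is an external module that must be fetched:

    package main

    import (
        "time"

        utiltrace "k8s.io/utils/trace"
    )

    // listAndWatch wraps the initial list in a named trace; the trace only
    // logs when the total time passes the threshold, emitting the step and
    // END lines seen in the journal.
    func listAndWatch(list func() error) error {
        t := utiltrace.New("Reflector ListAndWatch",
            utiltrace.Field{Key: "name", Value: "k8s.io/client-go/informers/factory.go:160"})
        defer t.LogIfLong(10 * time.Second) // threshold assumed for illustration

        err := list()
        t.Step("Objects listed", utiltrace.Field{Key: "error", Value: err})
        return err
    }

    func main() {
        // Stands in for the real List call, slow enough to trigger logging.
        _ = listAndWatch(func() error { time.Sleep(11 * time.Second); return nil })
    }

Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.085766 4803 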
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.085812 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.097656 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.112288 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.215474 4803 apiserver.go:52] "Watching apiserver" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.218503 4803 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.218878 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.219321 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.220024 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.220119 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.220251 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
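
Every "Error syncing pod, skipping" entry here is the same condition: the container runtime reports NetworkReady=false until a network plugin writes a configuration file into /etc/kubernetes/cni/net.d/, and the kubelet will not sync pods that need the cluster network until it does (the network-operator pods that eventually provide that file are themselves in the SyncLoop ADD list above). A sketch of the underlying check, not CRI-O's actual code; the directory and the accepted extensions are assumptions:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // hasCNIConfig mirrors the readiness condition: at least one network
    // config file must exist in the CNI conf directory.
    func hasCNIConfig(dir string) bool {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false
        }
        for _, e := range entries {
            ext := strings.ToLower(filepath.Ext(e.Name()))
            if !e.IsDir() && (ext == ".conf" || ext == ".conflist" || ext == ".json") {
                return true
            }
        }
        return false
    }

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        if !hasCNIConfig(dir) {
            fmt.Printf("NetworkReady=false: no CNI configuration file in %s. Has your network provider started?\n", dir)
        }
    }

Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.220535 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 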
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.220572 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.220641 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.220652 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.221320 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.224097 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.224565 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.224623 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.224780 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.225538 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.226012 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.226908 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.227211 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.227289 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.232579 4803 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.243715 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.243776 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.243825 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.243892 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.243929 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.243960 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.243991 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244018 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244042 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244063 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244084 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
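
Everything from "Finished populating initial desired state of world" onward is the kubelet volume manager reconciling after restart: the desired state is rebuilt from the pods the API server still wants, the actual state is reconstructed from mounts found on disk, and every volume that is still mounted but no longer desired gets an "UnmountVolume started" line; each successful teardown then logs the outer (pod-facing) and inner (plugin-facing) spec names, as in the TearDown entry that follows. A much-simplified sketch of that diff loop; the types and names here are illustrative, not kubelet's:

    package main

    import "fmt"

    // volumeKey identifies a mount the way the reconciler does: by pod and volume.
    type volumeKey struct{ podUID, volumeName string }

    // reconcile unmounts anything present in the actual state of the world
    // but absent from the desired state, logging the same two milestones
    // that appear in the journal above.
    func reconcile(desired map[volumeKey]bool, actual []volumeKey, tearDown func(volumeKey) error) {
        for _, v := range actual {
            if desired[v] {
                continue // still wanted by a pod; leave it mounted
            }
            fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", v.volumeName, v.podUID)
            if err := tearDown(v); err == nil {
                fmt.Printf("UnmountVolume.TearDown succeeded for volume %q pod %q\n", v.volumeName, v.podUID)
            }
        }
    }

    func main() {
        // One of the stale mounts from the log; its pod is gone, so nothing desires it.
        actual := []volumeKey{{podUID: "5441d097-087c-4d9a-baa8-b210afa90fc9", volumeName: "kube-api-access-2d4wz"}}
        desired := map[volumeKey]bool{}
        reconcile(desired, actual, func(volumeKey) error { return nil })
    }

Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244073 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 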
"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244115 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244150 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244185 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244215 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244243 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244266 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244292 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244316 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244341 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 21:47:47 
crc kubenswrapper[4803]: I0127 21:47:47.244347 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244365 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244389 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244414 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244438 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244462 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244484 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244507 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244528 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244555 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244579 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244601 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244634 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244656 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244678 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244703 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244725 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244751 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244771 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244794 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244818 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244840 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244895 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244934 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.244991 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245106 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245114 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245119 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245143 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245168 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245192 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245215 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245241 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245266 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245290 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245315 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245338 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245360 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245383 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245403 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245426 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245448 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245471 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245495 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245518 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245543 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245569 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245597 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245623 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245648 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245674 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245700 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245727 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245753 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245775 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245799 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245822 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246587 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246661 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246689 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246713 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246741 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246771 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246794 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246818 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246841 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246903 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246926 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246954 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246978 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247001 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247023 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247047 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247069 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247091 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247115 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247140 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247164 4803 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247185 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247209 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247235 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247258 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247282 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247308 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247331 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247353 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247377 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 21:47:47 crc 
kubenswrapper[4803]: I0127 21:47:47.247449 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247473 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247498 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247522 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247547 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247572 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247600 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247628 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247656 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247679 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247704 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247728 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247752 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247779 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247803 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247830 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247883 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247919 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247944 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247967 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247992 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248016 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248040 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248069 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248093 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248122 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248149 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248173 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248197 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248222 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248246 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248271 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248294 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248318 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248344 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248369 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248400 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248426 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248449 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248473 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248497 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248520 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248542 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248569 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248597 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248621 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248647 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248671 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248694 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248721 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248745 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248779 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248820 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248882 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248916 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248942 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248990 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249016 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249041 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249065 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249090 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249117 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249146 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249172 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249200 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249224 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249249 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249273 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249300 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249324 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249349 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249376 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249400 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249425 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249451 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249476 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249500 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249524 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249552 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249577 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249602 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245257 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245505 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245719 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.245873 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.249685 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:47:47.749609976 +0000 UTC m=+20.165631675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246161 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246215 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246345 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246377 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:27:50.831173837 +0000 UTC
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.246788 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247254 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247276 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247556 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247585 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247599 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247640 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247774 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.247804 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248053 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248092 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248100 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248175 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248229 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248540 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248545 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248547 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248837 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248913 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.248930 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249127 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249130 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249133 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249341 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249412 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249477 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249486 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249941 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249623 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250202 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250404 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250480 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250484 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250635 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.249762 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250726 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250767 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250806 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250833 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250876 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250874 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250901 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250930 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250956 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.250980 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251069 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251061 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251040 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251177 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251227 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251254 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251281 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251307 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251330 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251357 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251394 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251424 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251451 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251495 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251501 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251463 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251523 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251832 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251916 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251940 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.251975 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.252014 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.252034 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.252047 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.252060 4803 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.252071 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.252081 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.252061 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.252733 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.253078 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.253177 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.253214 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.253227 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.253378 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.253464 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.253722 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.254090 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.254086 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.254306 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.254352 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.254410 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.254616 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.254698 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.254391 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.254808 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.255646 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.256073 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.256263 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.256289 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.256312 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.256770 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.256924 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.257371 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.257272 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.257531 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.257573 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.257577 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.257884 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.257312 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258099 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258519 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258559 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258592 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258653 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258655 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258692 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258733 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258760 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258783 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258907 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.258987 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.259016 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.259270 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.259385 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.259455 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.259456 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.259950 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.260887 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.260903 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.260903 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.260969 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.261223 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.261554 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.261956 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.261960 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.262134 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.262195 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.262287 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.262329 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.262605 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.262701 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.262766 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.262873 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.263075 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.263142 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.263269 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.263568 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.263592 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.263593 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.263866 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.263874 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.264147 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.264230 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.265887 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.265905 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.265920 4803 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.266004 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:47.765964957 +0000 UTC m=+20.181986646 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.266224 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.266640 4803 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.266811 4803 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.266895 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:47.766876282 +0000 UTC m=+20.182897991 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.266943 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:47.766933324 +0000 UTC m=+20.182955043 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.267032 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.267119 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.267126 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.267161 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.264377 4803 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.267842 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.267883 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.268659 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.268708 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271762 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271795 4803 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271813 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271826 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271871 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271889 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271904 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271916 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271929 4803 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271941 4803 reconciler_common.go:293] "Volume detached 
for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271953 4803 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271965 4803 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.271978 4803 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272002 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272035 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272048 4803 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272061 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272075 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272088 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272102 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272115 4803 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272128 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272141 4803 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272154 4803 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272166 4803 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272179 4803 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272664 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272677 4803 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272693 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272705 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272718 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272732 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272745 4803 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272758 4803 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272771 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272784 4803 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272799 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272813 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272827 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272840 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272871 4803 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272884 4803 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.272323 4803 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.274088 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.274187 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.282892 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.283921 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.284612 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.285379 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.286325 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.287743 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.288093 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.288129 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.288143 4803 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.288210 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:47.788187695 +0000 UTC m=+20.204209394 (durationBeforeRetry 500ms). 
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.288437 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.288572 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.288617 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.289187 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.289288 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.289603 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.289743 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.290325 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.290625 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.292204 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.293869 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.293963 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.296029 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.300192 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.302021 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.302034 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.302091 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.302116 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.302582 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.302661 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.302669 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.302985 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.303019 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.303051 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.303063 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.303407 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.303452 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.303659 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.303673 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.303710 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.303735 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.304292 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.304377 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.304583 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.304891 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.304929 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.305198 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.306550 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.306739 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.307275 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.307315 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.307399 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.307536 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.307599 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.307658 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.307708 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.307758 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.307792 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.307946 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.308347 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.308377 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.308704 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.309603 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.311772 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.312283 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.312333 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.314325 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.318750 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.323952 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.324016 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.328452 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.332229 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.342601 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.343313 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.354949 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373336 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373490 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373555 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373569 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373581 4803 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373592 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373593 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373624 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373602 4803 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373668 4803 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373680 4803 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373694 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373708 4803 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373741 4803 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373754 4803 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373763 4803 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373776 4803 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373808 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373825 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373838 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373880 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373892 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373905 4803 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373918 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373948 4803 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373960 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373969 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373978 4803 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.373987 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.374105 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.374835 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.374885 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.374896 4803 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.374923 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375003 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375060 4803 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375086 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375115 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375137 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375159 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375179 4803 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375201 4803 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375222 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375243 4803 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375265 4803 reconciler_common.go:293] "Volume 
detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375294 4803 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375323 4803 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375346 4803 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375368 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375392 4803 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375412 4803 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375432 4803 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375452 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375472 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375492 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375513 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375535 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375559 4803 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375582 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375603 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375624 4803 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375643 4803 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375663 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375682 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375703 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375726 4803 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375746 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375767 4803 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375787 4803 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375807 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375828 4803 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375878 4803 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375900 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375922 4803 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375943 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375964 4803 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.375985 4803 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376005 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376025 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376045 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376066 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376086 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376105 4803 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376132 4803 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376154 4803 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376174 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376195 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376214 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376234 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376255 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376281 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376310 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376333 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376354 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376373 4803 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376395 4803 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376415 4803 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376435 4803 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376456 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376477 4803 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376497 4803 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376520 4803 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376541 4803 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376562 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376584 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376605 4803 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376625 4803 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376651 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376672 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376691 4803 reconciler_common.go:293] "Volume detached for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376709 4803 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376729 4803 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376749 4803 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376769 4803 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376789 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376810 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376828 4803 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376875 4803 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376894 4803 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376952 4803 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376973 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.376993 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377012 4803 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377030 4803 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377049 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377068 4803 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377087 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377108 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377130 4803 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377154 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377174 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377193 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377213 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377233 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377253 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377276 4803 reconciler_common.go:293] "Volume detached for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377305 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377330 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377349 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377369 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377389 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377410 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377429 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377449 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377469 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377487 4803 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377507 4803 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.377525 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.432212 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.436355 4803 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2" exitCode=255 Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.436443 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2"} Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.444971 4803 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.453126 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.453647 4803 scope.go:117] "RemoveContainer" containerID="78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.454167 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.466743 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.527488 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.542488 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.553399 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.555961 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.580424 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.588949 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.600049 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: W0127 21:47:47.622452 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c99cb75bce132191d0a14b3336af55fcc629125226dcd1f16372c8f2a88d0196 WatchSource:0}: Error finding container c99cb75bce132191d0a14b3336af55fcc629125226dcd1f16372c8f2a88d0196: Status 404 returned error can't find the container with id c99cb75bce132191d0a14b3336af55fcc629125226dcd1f16372c8f2a88d0196 Jan 27 21:47:47 crc kubenswrapper[4803]: W0127 21:47:47.632799 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-e4a5e1643d42e6264245d2cb4d78f52c5724d79e9a232cc3e425c5e3a01dee1e WatchSource:0}: Error finding container e4a5e1643d42e6264245d2cb4d78f52c5724d79e9a232cc3e425c5e3a01dee1e: Status 404 returned error can't find the container with id e4a5e1643d42e6264245d2cb4d78f52c5724d79e9a232cc3e425c5e3a01dee1e Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.642612 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.781079 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.781834 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:47:48.781811745 +0000 UTC m=+21.197833444 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.784095 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.784179 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.784216 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.784460 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.784476 4803 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.784493 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.784510 4803 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.784540 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:48.784521876 +0000 UTC m=+21.200543585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.784593 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:48.784570257 +0000 UTC m=+21.200591966 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.784629 4803 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.784672 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:48.78466523 +0000 UTC m=+21.200686939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 21:47:47 crc kubenswrapper[4803]: I0127 21:47:47.885696 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.885900 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.885922 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.885936 4803 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 21:47:47 crc kubenswrapper[4803]: E0127 21:47:47.886004 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:48.885986008 +0000 UTC m=+21.302007707 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.075892 4803 csr.go:261] certificate signing request csr-wsqdj is approved, waiting to be issued
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.087992 4803 csr.go:257] certificate signing request csr-wsqdj is issued
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.249973 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:39:44.279093889 +0000 UTC
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.309927 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.310423 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.311196 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.311749 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.312947 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.313401 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.314304 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.314799 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.315742 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.316231 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.317119 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.317730 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.318227 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.318578 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.319089 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.319560 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.320401 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.320967 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.321836 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.322419 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.322969 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.323740 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.324342 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.324768 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.325707 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.326104 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.327068 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.327649 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.328484 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.329029 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.329786 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.330226 4803 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.330320 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.332195 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.332659 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.333173 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.333983 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.334626 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.335620 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.336108 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.337168 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.337801 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.338928 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.339515 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.342766 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.343698 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.344713 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.345270 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.347353 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.348445 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.349523 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.351406 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.351913 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.355480 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.356121 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.356904 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.357952 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.358186 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.374611 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.386223 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.396639 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.410906 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.411643 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-gwmq2"]
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.412000 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gwmq2"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.414683 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.414777 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.417241 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.425404 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.435962 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.442766 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.444277 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b"}
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.445139 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.446277 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e4a5e1643d42e6264245d2cb4d78f52c5724d79e9a232cc3e425c5e3a01dee1e"}
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.448865 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6"}
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.448899 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb"}
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.448912 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a3b8863f9c9cc5b3564eab3be2e48731103a94ff7feb065ccc441b74efde498c"}
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.449544 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.451522 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65"}
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.451555 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c99cb75bce132191d0a14b3336af55fcc629125226dcd1f16372c8f2a88d0196"}
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.467925 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.489586 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4shf8\" (UniqueName: \"kubernetes.io/projected/8dba4d19-a8ee-4103-94e5-b1e0b352df62-kube-api-access-4shf8\") pod \"node-resolver-gwmq2\" (UID: \"8dba4d19-a8ee-4103-94e5-b1e0b352df62\") " pod="openshift-dns/node-resolver-gwmq2"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.489649 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8dba4d19-a8ee-4103-94e5-b1e0b352df62-hosts-file\") pod \"node-resolver-gwmq2\" (UID: \"8dba4d19-a8ee-4103-94e5-b1e0b352df62\") " pod="openshift-dns/node-resolver-gwmq2"
Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.499134 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.518383 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.535880 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.575216 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.590400 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8dba4d19-a8ee-4103-94e5-b1e0b352df62-hosts-file\") pod \"node-resolver-gwmq2\" (UID: \"8dba4d19-a8ee-4103-94e5-b1e0b352df62\") " pod="openshift-dns/node-resolver-gwmq2" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.590564 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4shf8\" (UniqueName: \"kubernetes.io/projected/8dba4d19-a8ee-4103-94e5-b1e0b352df62-kube-api-access-4shf8\") pod \"node-resolver-gwmq2\" (UID: \"8dba4d19-a8ee-4103-94e5-b1e0b352df62\") " pod="openshift-dns/node-resolver-gwmq2" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.590604 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8dba4d19-a8ee-4103-94e5-b1e0b352df62-hosts-file\") pod \"node-resolver-gwmq2\" (UID: \"8dba4d19-a8ee-4103-94e5-b1e0b352df62\") " pod="openshift-dns/node-resolver-gwmq2" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.593950 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.606039 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.609247 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4shf8\" (UniqueName: \"kubernetes.io/projected/8dba4d19-a8ee-4103-94e5-b1e0b352df62-kube-api-access-4shf8\") pod \"node-resolver-gwmq2\" (UID: \"8dba4d19-a8ee-4103-94e5-b1e0b352df62\") " pod="openshift-dns/node-resolver-gwmq2" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.621690 4803 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.635752 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.654089 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.671951 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.687047 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.699868 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.714636 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.723695 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gwmq2" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.727682 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: W0127 
21:47:48.744153 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dba4d19_a8ee_4103_94e5_b1e0b352df62.slice/crio-d57eeec0691180042aa46bcea01949816cf909affa959dba6f0435ac66c5891c WatchSource:0}: Error finding container d57eeec0691180042aa46bcea01949816cf909affa959dba6f0435ac66c5891c: Status 404 returned error can't find the container with id d57eeec0691180042aa46bcea01949816cf909affa959dba6f0435ac66c5891c Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.793911 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.793999 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.794031 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.794059 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.794212 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.794231 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.794237 4803 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.794299 4803 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.794344 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:50.794314919 +0000 UTC m=+23.210336618 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.794369 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:50.79436023 +0000 UTC m=+23.210381929 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.794248 4803 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.794416 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:47:50.794395421 +0000 UTC m=+23.210417310 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.794441 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:50.794430722 +0000 UTC m=+23.210452631 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.825827 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-qnns7"] Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.826254 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-d56gp"] Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.826468 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.826513 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-m87bw"] Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.826639 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.827428 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.831883 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.832169 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.833062 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.833212 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.833277 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.833223 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.833380 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.833386 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.833081 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.833434 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.833763 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.833936 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.854170 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.876336 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.895391 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-run-k8s-cni-cncf-io\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.895471 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-hostroot\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.895508 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-run-multus-certs\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.895684 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aeb23e3d-ee70-4f1d-85c0-005373cca336-mcd-auth-proxy-config\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.895783 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.895810 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-socket-dir-parent\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.895836 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-daemon-config\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.895913 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.895945 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aeb23e3d-ee70-4f1d-85c0-005373cca336-proxy-tls\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.895970 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk2r2\" (UniqueName: \"kubernetes.io/projected/14e37235-ed32-42bc-b5b0-49278fed9593-kube-api-access-rk2r2\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.895999 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-os-release\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896029 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-var-lib-cni-bin\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896054 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/14e37235-ed32-42bc-b5b0-49278fed9593-cni-binary-copy\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896089 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-var-lib-cni-multus\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896127 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-system-cni-dir\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896149 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47kbb\" (UniqueName: \"kubernetes.io/projected/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-kube-api-access-47kbb\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896176 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-os-release\") pod \"multus-qnns7\" (UID: 
\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896201 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/14e37235-ed32-42bc-b5b0-49278fed9593-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896223 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-run-netns\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896247 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-var-lib-kubelet\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896271 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-conf-dir\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.896425 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896477 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-cnibin\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.896483 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.896521 4803 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896516 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-cni-binary-copy\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896574 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-cni-dir\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: E0127 21:47:48.896634 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:50.896616954 +0000 UTC m=+23.312638653 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896669 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-cnibin\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896713 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aeb23e3d-ee70-4f1d-85c0-005373cca336-rootfs\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896747 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-etc-kubernetes\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896803 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flmnp\" (UniqueName: \"kubernetes.io/projected/aeb23e3d-ee70-4f1d-85c0-005373cca336-kube-api-access-flmnp\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.896898 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-system-cni-dir\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.922347 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.955353 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.997472 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.997785 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-cni-dir\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.997826 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-cnibin\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.997868 4803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aeb23e3d-ee70-4f1d-85c0-005373cca336-rootfs\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.997885 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-etc-kubernetes\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.997905 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flmnp\" (UniqueName: \"kubernetes.io/projected/aeb23e3d-ee70-4f1d-85c0-005373cca336-kube-api-access-flmnp\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.997930 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-system-cni-dir\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.997947 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-run-k8s-cni-cncf-io\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.997961 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-hostroot\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.997978 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-run-multus-certs\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998007 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aeb23e3d-ee70-4f1d-85c0-005373cca336-mcd-auth-proxy-config\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998026 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998042 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-socket-dir-parent\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998057 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-daemon-config\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998078 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aeb23e3d-ee70-4f1d-85c0-005373cca336-proxy-tls\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998095 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk2r2\" (UniqueName: \"kubernetes.io/projected/14e37235-ed32-42bc-b5b0-49278fed9593-kube-api-access-rk2r2\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998113 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-os-release\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998112 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-cni-dir\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998129 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-var-lib-cni-bin\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998182 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-var-lib-cni-bin\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998186 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/14e37235-ed32-42bc-b5b0-49278fed9593-cni-binary-copy\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998211 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-var-lib-cni-multus\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998242 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-system-cni-dir\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998253 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-cnibin\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998262 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47kbb\" (UniqueName: \"kubernetes.io/projected/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-kube-api-access-47kbb\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998281 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aeb23e3d-ee70-4f1d-85c0-005373cca336-rootfs\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998284 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-os-release\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998305 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-etc-kubernetes\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998308 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/14e37235-ed32-42bc-b5b0-49278fed9593-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998329 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-run-netns\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998346 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-var-lib-kubelet\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " 
pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998362 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-conf-dir\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998382 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-cnibin\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998399 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-cni-binary-copy\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998799 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-system-cni-dir\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998858 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-run-k8s-cni-cncf-io\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998884 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-hostroot\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.998909 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-run-multus-certs\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.999081 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-cni-binary-copy\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.999615 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/14e37235-ed32-42bc-b5b0-49278fed9593-cni-binary-copy\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.999638 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/aeb23e3d-ee70-4f1d-85c0-005373cca336-mcd-auth-proxy-config\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.999653 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-var-lib-cni-multus\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:48 crc kubenswrapper[4803]: I0127 21:47:48.999683 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-system-cni-dir\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.000115 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-os-release\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.000255 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.000318 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-socket-dir-parent\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.000643 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/14e37235-ed32-42bc-b5b0-49278fed9593-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.000708 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-run-netns\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.000741 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-host-var-lib-kubelet\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.000766 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-conf-dir\") pod 
\"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.000786 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-cnibin\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.000804 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-multus-daemon-config\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.000883 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/14e37235-ed32-42bc-b5b0-49278fed9593-os-release\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.004032 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aeb23e3d-ee70-4f1d-85c0-005373cca336-proxy-tls\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.020301 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flmnp\" (UniqueName: \"kubernetes.io/projected/aeb23e3d-ee70-4f1d-85c0-005373cca336-kube-api-access-flmnp\") pod \"machine-config-daemon-d56gp\" (UID: \"aeb23e3d-ee70-4f1d-85c0-005373cca336\") " pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.024772 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47kbb\" (UniqueName: \"kubernetes.io/projected/2a912f01-6d26-421f-8b21-fb2f98d5c2e6-kube-api-access-47kbb\") pod \"multus-qnns7\" (UID: \"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\") " pod="openshift-multus/multus-qnns7" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.032499 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk2r2\" (UniqueName: \"kubernetes.io/projected/14e37235-ed32-42bc-b5b0-49278fed9593-kube-api-access-rk2r2\") pod \"multus-additional-cni-plugins-m87bw\" (UID: \"14e37235-ed32-42bc-b5b0-49278fed9593\") " pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.040869 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.079054 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.089247 4803 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 21:42:48 +0000 UTC, rotation deadline is 2026-10-30 21:19:58.193785021 +0000 UTC Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.089355 4803 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6623h32m9.104434133s for next certificate rotation Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.099303 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.115904 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.136676 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.145176 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-qnns7" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.148832 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.156082 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:47:49 crc kubenswrapper[4803]: W0127 21:47:49.156707 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a912f01_6d26_421f_8b21_fb2f98d5c2e6.slice/crio-fcef4a021a6baa5691a023e629aa2ead09db011ff654e6d186e43a925694bbd1 WatchSource:0}: Error finding container fcef4a021a6baa5691a023e629aa2ead09db011ff654e6d186e43a925694bbd1: Status 404 returned error can't find the container with id fcef4a021a6baa5691a023e629aa2ead09db011ff654e6d186e43a925694bbd1 Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.161787 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.162117 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-m87bw" Jan 27 21:47:49 crc kubenswrapper[4803]: W0127 21:47:49.170023 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeb23e3d_ee70_4f1d_85c0_005373cca336.slice/crio-a756ceba5397830ea02055e8f189feb3b723a05e2e77a9a649afec34228a1c3e WatchSource:0}: Error finding container a756ceba5397830ea02055e8f189feb3b723a05e2e77a9a649afec34228a1c3e: Status 404 returned error can't find the container with id a756ceba5397830ea02055e8f189feb3b723a05e2e77a9a649afec34228a1c3e Jan 27 21:47:49 crc kubenswrapper[4803]: W0127 21:47:49.179147 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14e37235_ed32_42bc_b5b0_49278fed9593.slice/crio-4672ee0c8a180ef8c44253253094e976d75cf6b438076d0a50df4b9e379d1b53 WatchSource:0}: Error finding container 4672ee0c8a180ef8c44253253094e976d75cf6b438076d0a50df4b9e379d1b53: Status 404 returned error can't find the container with id 4672ee0c8a180ef8c44253253094e976d75cf6b438076d0a50df4b9e379d1b53 Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.185693 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.199803 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.221938 4803 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1
180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.241484 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6dhj4"] Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.242014 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.242613 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.244970 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.245358 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.245482 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.245576 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.245688 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.245882 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.250245 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:49:35.952230557 +0000 UTC Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.250346 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.262125 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.280068 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.298994 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.301653 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-kubelet\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.301712 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xnhr\" (UniqueName: \"kubernetes.io/projected/db438ee2-57c2-4cbf-9d4b-96f8587647d6-kube-api-access-4xnhr\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc 
kubenswrapper[4803]: I0127 21:47:49.301756 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-bin\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.301784 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-node-log\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.301868 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-systemd-units\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.301919 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-netns\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.301943 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-openvswitch\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.301972 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovn-node-metrics-cert\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.301998 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-config\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.302140 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-slash\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.304583 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-netd\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.304644 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-etc-openvswitch\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.304774 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-systemd\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.304878 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-ovn\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.304901 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-log-socket\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.304944 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-script-lib\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.304971 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-env-overrides\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.305033 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-ovn-kubernetes\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.305053 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-var-lib-openvswitch\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.305075 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.306566 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.306616 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.306566 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:47:49 crc kubenswrapper[4803]: E0127 21:47:49.306713 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:47:49 crc kubenswrapper[4803]: E0127 21:47:49.306960 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:47:49 crc kubenswrapper[4803]: E0127 21:47:49.307046 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.315383 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.332170 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.345289 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.357649 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.371003 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.385223 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.399684 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.406749 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-config\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407211 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-slash\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 
21:47:49.407238 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-netd\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407257 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-etc-openvswitch\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407276 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-systemd\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407292 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-ovn\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407310 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-log-socket\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407332 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-script-lib\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407353 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-ovn-kubernetes\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407368 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-env-overrides\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407385 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-var-lib-openvswitch\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407401 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407441 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-kubelet\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407458 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xnhr\" (UniqueName: \"kubernetes.io/projected/db438ee2-57c2-4cbf-9d4b-96f8587647d6-kube-api-access-4xnhr\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407480 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-node-log\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407495 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-bin\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407511 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-systemd-units\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407530 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-netns\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407545 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-openvswitch\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407563 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovn-node-metrics-cert\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.407732 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-config\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408112 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-node-log\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408165 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-var-lib-openvswitch\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408193 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-env-overrides\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408204 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408238 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-slash\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408241 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-kubelet\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408328 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-ovn\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408378 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-systemd\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408406 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-netd\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408433 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-systemd-units\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408446 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-ovn-kubernetes\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408475 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-openvswitch\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408455 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-netns\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408532 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-etc-openvswitch\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408581 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-bin\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.408955 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-script-lib\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.409040 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-log-socket\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.412115 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovn-node-metrics-cert\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.420965 4803 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.426570 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xnhr\" (UniqueName: \"kubernetes.io/projected/db438ee2-57c2-4cbf-9d4b-96f8587647d6-kube-api-access-4xnhr\") pod \"ovnkube-node-6dhj4\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.437801 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.451310 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.454613 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51"} Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.454653 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7"} Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.454666 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"a756ceba5397830ea02055e8f189feb3b723a05e2e77a9a649afec34228a1c3e"} Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.456012 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" event={"ID":"14e37235-ed32-42bc-b5b0-49278fed9593","Type":"ContainerStarted","Data":"f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2"} Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.456056 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" event={"ID":"14e37235-ed32-42bc-b5b0-49278fed9593","Type":"ContainerStarted","Data":"4672ee0c8a180ef8c44253253094e976d75cf6b438076d0a50df4b9e379d1b53"} Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.457874 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qnns7" event={"ID":"2a912f01-6d26-421f-8b21-fb2f98d5c2e6","Type":"ContainerStarted","Data":"693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5"} Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.457898 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qnns7" event={"ID":"2a912f01-6d26-421f-8b21-fb2f98d5c2e6","Type":"ContainerStarted","Data":"fcef4a021a6baa5691a023e629aa2ead09db011ff654e6d186e43a925694bbd1"} Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.460056 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gwmq2" event={"ID":"8dba4d19-a8ee-4103-94e5-b1e0b352df62","Type":"ContainerStarted","Data":"4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef"} Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.460102 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gwmq2" event={"ID":"8dba4d19-a8ee-4103-94e5-b1e0b352df62","Type":"ContainerStarted","Data":"d57eeec0691180042aa46bcea01949816cf909affa959dba6f0435ac66c5891c"} Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.464794 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.483713 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: 
I0127 21:47:49.501988 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.513211 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.529840 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.550819 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.557242 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.571730 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.605209 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is 
after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.636172 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.677288 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.702293 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.718318 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f
6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.732812 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",
\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.750144 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.765153 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.782074 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.796054 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.815072 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:49 crc kubenswrapper[4803]: I0127 21:47:49.830445 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:49Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.252722 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:10:21.248457886 +0000 UTC Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.468764 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade" exitCode=0 Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.468817 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade"} Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.468888 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"854e03ada2428f3caddbacc5284f818977e9a30ba55be33a226a6a94747b0196"} Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.470810 4803 generic.go:334] "Generic (PLEG): container finished" podID="14e37235-ed32-42bc-b5b0-49278fed9593" containerID="f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2" exitCode=0 Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.470873 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" event={"ID":"14e37235-ed32-42bc-b5b0-49278fed9593","Type":"ContainerDied","Data":"f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2"} Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.483427 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.501671 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd 
nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cn
i-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.515241 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.525462 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.540775 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.557547 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f
6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.585306 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.622607 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.641083 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.668874 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.683733 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.696042 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.709764 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.723544 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.737249 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.754985 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.780118 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.793347 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.808103 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.820372 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.822439 4803 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.822553 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.822600 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.822652 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:47:54.822609796 +0000 UTC m=+27.238631525 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.822696 4803 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.822729 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.822751 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:54.822726949 +0000 UTC m=+27.238748648 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.823016 4803 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.823051 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:54.823041978 +0000 UTC m=+27.239063667 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.823100 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.823136 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.823158 4803 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.823235 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:54.823214162 +0000 UTC m=+27.239235891 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.837707 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc 
kubenswrapper[4803]: I0127 21:47:50.850867 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.865424 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.877632 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.908009 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.923391 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.923516 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.923532 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.923544 4803 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:50 crc kubenswrapper[4803]: E0127 21:47:50.923584 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 21:47:54.923571995 +0000 UTC m=+27.339593694 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 21:47:50 crc kubenswrapper[4803]: I0127 21:47:50.924556 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:50Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.253258 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:04:48.008523915 +0000 UTC
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.305976 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:47:51 crc kubenswrapper[4803]: E0127 21:47:51.306138 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.306291 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.306304 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:47:51 crc kubenswrapper[4803]: E0127 21:47:51.306468 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:47:51 crc kubenswrapper[4803]: E0127 21:47:51.306555 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.477464 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c"}
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.480625 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" event={"ID":"14e37235-ed32-42bc-b5b0-49278fed9593","Type":"ContainerStarted","Data":"8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c"}
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.485262 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904"}
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.485310 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d"}
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.485322 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0"}
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.485332 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1"}
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.485342 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386"}
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.493123 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.509330 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.522643 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.535234 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.553145 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.565603 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.577391 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-flq97"]
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.577865 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-flq97"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.580139 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.580403 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.580557 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.583152 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.583648 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.598623 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.613942 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.627172 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.644118 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z"
Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.659921 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.680221 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.695833 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.707973 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.719157 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.732622 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4970974-561c-402f-9b67-aa8c43445762-host\") pod \"node-ca-flq97\" (UID: \"c4970974-561c-402f-9b67-aa8c43445762\") " pod="openshift-image-registry/node-ca-flq97" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.732745 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7mcx\" (UniqueName: \"kubernetes.io/projected/c4970974-561c-402f-9b67-aa8c43445762-kube-api-access-t7mcx\") pod \"node-ca-flq97\" (UID: \"c4970974-561c-402f-9b67-aa8c43445762\") " pod="openshift-image-registry/node-ca-flq97" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.732828 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c4970974-561c-402f-9b67-aa8c43445762-serviceca\") pod \"node-ca-flq97\" (UID: \"c4970974-561c-402f-9b67-aa8c43445762\") " pod="openshift-image-registry/node-ca-flq97" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.736681 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.747088 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.762665 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.777814 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.793308 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.819378 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.834468 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7mcx\" (UniqueName: \"kubernetes.io/projected/c4970974-561c-402f-9b67-aa8c43445762-kube-api-access-t7mcx\") pod \"node-ca-flq97\" (UID: \"c4970974-561c-402f-9b67-aa8c43445762\") " pod="openshift-image-registry/node-ca-flq97" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.834520 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c4970974-561c-402f-9b67-aa8c43445762-serviceca\") pod \"node-ca-flq97\" (UID: \"c4970974-561c-402f-9b67-aa8c43445762\") " pod="openshift-image-registry/node-ca-flq97" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.834560 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4970974-561c-402f-9b67-aa8c43445762-host\") pod \"node-ca-flq97\" (UID: \"c4970974-561c-402f-9b67-aa8c43445762\") " pod="openshift-image-registry/node-ca-flq97" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.834634 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4970974-561c-402f-9b67-aa8c43445762-host\") pod \"node-ca-flq97\" (UID: \"c4970974-561c-402f-9b67-aa8c43445762\") " pod="openshift-image-registry/node-ca-flq97" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.835671 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c4970974-561c-402f-9b67-aa8c43445762-serviceca\") pod \"node-ca-flq97\" (UID: \"c4970974-561c-402f-9b67-aa8c43445762\") " pod="openshift-image-registry/node-ca-flq97" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.860174 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.884902 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7mcx\" (UniqueName: \"kubernetes.io/projected/c4970974-561c-402f-9b67-aa8c43445762-kube-api-access-t7mcx\") pod \"node-ca-flq97\" (UID: \"c4970974-561c-402f-9b67-aa8c43445762\") " pod="openshift-image-registry/node-ca-flq97" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.902341 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-flq97" Jan 27 21:47:51 crc kubenswrapper[4803]: W0127 21:47:51.920792 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4970974_561c_402f_9b67_aa8c43445762.slice/crio-51097106eb70742b88ee21e0566d4d52ae18df9aded5922aa9e2d8d391d3731a WatchSource:0}: Error finding container 51097106eb70742b88ee21e0566d4d52ae18df9aded5922aa9e2d8d391d3731a: Status 404 returned error can't find the container with id 51097106eb70742b88ee21e0566d4d52ae18df9aded5922aa9e2d8d391d3731a Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.923279 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.960954 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:51 crc kubenswrapper[4803]: I0127 21:47:51.999266 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.043210 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.254057 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:14:02.355033105 +0000 UTC Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.491758 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1"} Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.493736 4803 generic.go:334] "Generic (PLEG): container finished" podID="14e37235-ed32-42bc-b5b0-49278fed9593" containerID="8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c" exitCode=0 Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.493795 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" event={"ID":"14e37235-ed32-42bc-b5b0-49278fed9593","Type":"ContainerDied","Data":"8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c"} Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.497212 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-flq97" event={"ID":"c4970974-561c-402f-9b67-aa8c43445762","Type":"ContainerStarted","Data":"6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84"} Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.497256 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-flq97" event={"ID":"c4970974-561c-402f-9b67-aa8c43445762","Type":"ContainerStarted","Data":"51097106eb70742b88ee21e0566d4d52ae18df9aded5922aa9e2d8d391d3731a"} Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.507782 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.524949 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.564550 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.588799 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.627461 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z 
is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.639369 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.651017 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.662772 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.674187 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.688018 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.703676 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.718974 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.733142 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.746331 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.758499 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.771590 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.784425 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.797475 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.815552 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z 
is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.837282 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.877628 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.918541 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.970058 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:52 crc kubenswrapper[4803]: I0127 21:47:52.997319 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:52Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.038772 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.077179 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.122462 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.159549 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.255141 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:38:52.294739208 +0000 UTC Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.306606 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.306651 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.306692 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:47:53 crc kubenswrapper[4803]: E0127 21:47:53.306751 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:47:53 crc kubenswrapper[4803]: E0127 21:47:53.306928 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:47:53 crc kubenswrapper[4803]: E0127 21:47:53.307119 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.442841 4803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.445322 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.445386 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.445398 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.445576 4803 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.453311 4803 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.453609 4803 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.455034 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.455094 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.455177 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.455198 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.455218 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:53Z","lastTransitionTime":"2026-01-27T21:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:53 crc kubenswrapper[4803]: E0127 21:47:53.471491 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.476338 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.476375 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.476386 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.476404 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.476417 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:53Z","lastTransitionTime":"2026-01-27T21:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:53 crc kubenswrapper[4803]: E0127 21:47:53.494427 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.500019 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.500059 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.500074 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.500095 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.500107 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:53Z","lastTransitionTime":"2026-01-27T21:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.503126 4803 generic.go:334] "Generic (PLEG): container finished" podID="14e37235-ed32-42bc-b5b0-49278fed9593" containerID="06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558" exitCode=0 Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.503182 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" event={"ID":"14e37235-ed32-42bc-b5b0-49278fed9593","Type":"ContainerDied","Data":"06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558"} Jan 27 21:47:53 crc kubenswrapper[4803]: E0127 21:47:53.513429 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list identical to the first patch above, elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.518092 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.518132 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.518168 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.518195 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.518210 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:53Z","lastTransitionTime":"2026-01-27T21:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.521038 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: E0127 21:47:53.535388 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ...node-status patch identical to the 21:47:53.513429 retry above, elided... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z"
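The patch bodies the kubelet is trying to apply are ordinary JSON, but the journal quoting (every quote inside the err= field appears as \\\") makes them nearly unreadable. A small sketch that recovers one of these payloads and pretty-prints it, assuming only the message framing visible above (the payload sits between 'failed to patch status \"' and the next '\" for'):

    // patchdump.go — unescape and pretty-print the status patch embedded in a
    // "failed to patch status" journal entry. The framing strings below are an
    // assumption read off the log format above, not a stable interface.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "strings"
    )

    func main() {
        // Shortened stand-in for one journal line; real lines carry the full patch.
        line := `err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"}}\" for pod ..."`

        const open = `failed to patch status \"`
        start := strings.Index(line, open)
        end := strings.Index(line, `\" for `)
        if start < 0 || end < start+len(open) {
            fmt.Println("no patch payload found")
            return
        }
        payload := line[start+len(open) : end]
        // Inside the quoted err= field every JSON quote is rendered as \\\".
        payload = strings.ReplaceAll(payload, `\\\"`, `"`)

        var buf bytes.Buffer
        if err := json.Indent(&buf, []byte(payload), "", "  "); err != nil {
            fmt.Println("not valid JSON after unescaping:", err)
            return
        }
        fmt.Println(buf.String())
    }

Applied to the network-check-target entry above, it would surface the interesting part in a few lines: the container is stuck waiting in ContainerCreating with restartCount 3, which is exactly what NetworkReady=false predicts.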
Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.539620 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.540363 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.540394 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.540405 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.540422 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.540434 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:53Z","lastTransitionTime":"2026-01-27T21:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.557132 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: E0127 21:47:53.558424 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: E0127 21:47:53.558613 4803 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.562590 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
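This closes one full update cycle: the kubelet attempts the node-status PATCH a bounded number of times per sync (the bound is a small constant, named nodeStatusUpdateRetry in the upstream kubelet), and when every attempt dies on the same webhook it gives up with "update node status exceeds retry count" until the next sync period. A sketch of that shape, with a stand-in for the rejected PATCH:

    // retryloop.go — the bounded-retry shape visible in this log: a few
    // "will retry" errors, then "exceeds retry count". updateNodeStatus is a
    // stand-in for the real PATCH call, not kubelet code.
    package main

    import (
        "errors"
        "fmt"
    )

    // Upstream kubelet bounds the loop with a small constant like this one.
    const nodeStatusUpdateRetry = 5

    func updateNodeStatus() error {
        // Every attempt fails the same way while the webhook cert stays expired.
        return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
    }

    func main() {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := updateNodeStatus(); err != nil {
                fmt.Println("Error updating node status, will retry:", err)
                continue
            }
            return
        }
        fmt.Println("Unable to update node status: update node status exceeds retry count")
    }

Nothing in this loop can succeed until the webhook certificate is rotated or the node clock is correct, which is why the journal repeats the same block, payload and all, on every sync.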
event="NodeHasSufficientMemory" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.562635 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.562647 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.562667 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.562679 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:53Z","lastTransitionTime":"2026-01-27T21:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.580421 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.592309 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 
2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.605073 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.619393 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.632687 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.645176 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.660714 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.665301 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.665364 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.665381 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.665416 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.665434 4803 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:53Z","lastTransitionTime":"2026-01-27T21:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.674479 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.688266 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.718568 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.764564 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.767650 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.767690 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.767702 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.767719 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.767730 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:53Z","lastTransitionTime":"2026-01-27T21:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.870240 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.870274 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.870283 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.870297 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.870308 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:53Z","lastTransitionTime":"2026-01-27T21:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.973087 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.973167 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.973183 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.973210 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:53 crc kubenswrapper[4803]: I0127 21:47:53.973225 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:53Z","lastTransitionTime":"2026-01-27T21:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.075924 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.076001 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.076021 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.076052 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.076072 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:54Z","lastTransitionTime":"2026-01-27T21:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.178902 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.178943 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.178955 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.178970 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.178981 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:54Z","lastTransitionTime":"2026-01-27T21:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.256238 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:06:01.63386455 +0000 UTC Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.282041 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.282496 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.282511 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.282533 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.282546 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:54Z","lastTransitionTime":"2026-01-27T21:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.385201 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.385241 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.385251 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.385269 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.385282 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:54Z","lastTransitionTime":"2026-01-27T21:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.487512 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.487560 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.487573 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.487593 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.487604 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:54Z","lastTransitionTime":"2026-01-27T21:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.508906 4803 generic.go:334] "Generic (PLEG): container finished" podID="14e37235-ed32-42bc-b5b0-49278fed9593" containerID="608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244" exitCode=0 Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.508988 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" event={"ID":"14e37235-ed32-42bc-b5b0-49278fed9593","Type":"ContainerDied","Data":"608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244"} Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.514689 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495"} Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.547580 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.559090 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.572741 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.589882 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.589909 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.589918 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.589930 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.589945 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:54Z","lastTransitionTime":"2026-01-27T21:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.590750 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.613394 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f
7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.624708 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.638556 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.652598 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.846438 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.846540 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.846603 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.846627 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.847218 4803 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.847316 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:02.847295407 +0000 UTC m=+35.263317116 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.847670 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:48:02.847660767 +0000 UTC m=+35.263682566 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.847709 4803 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.847736 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:02.847729029 +0000 UTC m=+35.263750728 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.848164 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.848177 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.848188 4803 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.848216 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:02.848207951 +0000 UTC m=+35.264229660 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.865750 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.867269 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.867303 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.867357 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.867387 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:54Z","lastTransitionTime":"2026-01-27T21:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.878462 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be
05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.893823 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.909058 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.926046 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.947458 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.947713 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.947758 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.947773 4803 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:54 crc kubenswrapper[4803]: E0127 21:47:54.947876 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:02.947830705 +0000 UTC m=+35.363852394 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.949252 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.963051 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.970158 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.970199 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.970213 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.970233 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:54 crc kubenswrapper[4803]: I0127 21:47:54.970245 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:54Z","lastTransitionTime":"2026-01-27T21:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.073419 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.073477 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.073490 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.073510 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.073524 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:55Z","lastTransitionTime":"2026-01-27T21:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.177022 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.177083 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.177095 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.177115 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.177132 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:55Z","lastTransitionTime":"2026-01-27T21:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.257240 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:55:30.008190004 +0000 UTC Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.279634 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.279683 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.279692 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.279712 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.279726 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:55Z","lastTransitionTime":"2026-01-27T21:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.305959 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.306043 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:55 crc kubenswrapper[4803]: E0127 21:47:55.306243 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.306304 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:47:55 crc kubenswrapper[4803]: E0127 21:47:55.306355 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:47:55 crc kubenswrapper[4803]: E0127 21:47:55.306492 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.383275 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.383324 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.383335 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.383355 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.383368 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:55Z","lastTransitionTime":"2026-01-27T21:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.487886 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.487919 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.487931 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.487948 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.487987 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:55Z","lastTransitionTime":"2026-01-27T21:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.591020 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.591083 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.591099 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.591116 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.591129 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:55Z","lastTransitionTime":"2026-01-27T21:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.693418 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.693461 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.693477 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.693495 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.693506 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:55Z","lastTransitionTime":"2026-01-27T21:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.796062 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.796102 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.796115 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.796131 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.796144 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:55Z","lastTransitionTime":"2026-01-27T21:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.899143 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.899183 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.899194 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.899207 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:55 crc kubenswrapper[4803]: I0127 21:47:55.899218 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:55Z","lastTransitionTime":"2026-01-27T21:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.002882 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.003319 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.003332 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.003353 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.003366 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:56Z","lastTransitionTime":"2026-01-27T21:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.107356 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.107432 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.107451 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.107477 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.107495 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:56Z","lastTransitionTime":"2026-01-27T21:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.209932 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.209968 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.209978 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.209993 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.210003 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:56Z","lastTransitionTime":"2026-01-27T21:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.258382 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 03:16:45.817147956 +0000 UTC Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.312388 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.312445 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.312459 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.312478 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.312491 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:56Z","lastTransitionTime":"2026-01-27T21:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.415624 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.415679 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.415697 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.415721 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.415737 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:56Z","lastTransitionTime":"2026-01-27T21:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.518517 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.518583 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.518601 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.518628 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.518647 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:56Z","lastTransitionTime":"2026-01-27T21:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.536534 4803 generic.go:334] "Generic (PLEG): container finished" podID="14e37235-ed32-42bc-b5b0-49278fed9593" containerID="fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12" exitCode=0 Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.536619 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" event={"ID":"14e37235-ed32-42bc-b5b0-49278fed9593","Type":"ContainerDied","Data":"fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.546576 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"7b45b342099f76ca02becd90c215a07622d8cce152adc8b63680b35adf45c2cc"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.546932 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.577967 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.597326 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.604447 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.620630 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.623768 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.623814 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.623833 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.623888 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.623905 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:56Z","lastTransitionTime":"2026-01-27T21:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.641737 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.660587 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.681511 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.703324 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.716481 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.726714 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.726749 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.726768 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.726787 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.726800 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:56Z","lastTransitionTime":"2026-01-27T21:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.736899 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be
05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.746296 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.758695 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.769089 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.780470 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.799866 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.812467 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.822142 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.829250 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.829392 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.829450 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.829510 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.829564 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:56Z","lastTransitionTime":"2026-01-27T21:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.834813 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.851933 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.862700 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.876392 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.889023 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.903726 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.916806 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.931955 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.932000 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.932010 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.932027 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.932037 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:56Z","lastTransitionTime":"2026-01-27T21:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.932422 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.949377 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.969677 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:56 crc kubenswrapper[4803]: I0127 21:47:56.984563 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.005433 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b45b342099f76ca02becd90c215a07622d8cce1
52adc8b63680b35adf45c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.035133 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.035192 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.035209 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.035234 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.035251 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:57Z","lastTransitionTime":"2026-01-27T21:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.137538 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.137571 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.137579 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.137593 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.137602 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:57Z","lastTransitionTime":"2026-01-27T21:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.240048 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.240094 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.240104 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.240121 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.240131 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:57Z","lastTransitionTime":"2026-01-27T21:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.259069 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:44:31.287717394 +0000 UTC Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.305662 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.305718 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:47:57 crc kubenswrapper[4803]: E0127 21:47:57.305804 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:47:57 crc kubenswrapper[4803]: E0127 21:47:57.305931 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.305988 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:47:57 crc kubenswrapper[4803]: E0127 21:47:57.306051 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.342914 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.342964 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.342981 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.343003 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.343020 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:57Z","lastTransitionTime":"2026-01-27T21:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.445827 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.445924 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.445942 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.445962 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.445978 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:57Z","lastTransitionTime":"2026-01-27T21:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.548928 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.548982 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.548999 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.549055 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.549073 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:57Z","lastTransitionTime":"2026-01-27T21:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.553517 4803 generic.go:334] "Generic (PLEG): container finished" podID="14e37235-ed32-42bc-b5b0-49278fed9593" containerID="6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d" exitCode=0 Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.553552 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" event={"ID":"14e37235-ed32-42bc-b5b0-49278fed9593","Type":"ContainerDied","Data":"6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d"} Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.553688 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.554143 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.577122 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.584493 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.600412 4803 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b45b342099f76ca02becd90c215a07622d8cce152adc8b63680b35adf45c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.614728 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.629758 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.653024 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.653122 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.653169 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.653184 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.653204 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.653222 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:57Z","lastTransitionTime":"2026-01-27T21:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.666598 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.682888 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.696117 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.708695 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.721757 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.737248 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.751024 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.756965 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.757026 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.757039 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.757058 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.757071 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:57Z","lastTransitionTime":"2026-01-27T21:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.770129 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.781882 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.793973 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.808364 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.822091 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.834902 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.851650 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b45b342099f76ca02becd90c215a07622d8cce1
52adc8b63680b35adf45c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.859406 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.859443 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.859452 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.859469 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.859479 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:57Z","lastTransitionTime":"2026-01-27T21:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.861933 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.875343 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-0
1-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.883948 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.894933 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.904769 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.915251 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.929002 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.940488 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.952333 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.962893 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.962924 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.962936 4803 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.962954 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:57 crc kubenswrapper[4803]: I0127 21:47:57.962964 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:57Z","lastTransitionTime":"2026-01-27T21:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.047796 4803 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.070771 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.070817 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.070829 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.070873 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.070886 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:58Z","lastTransitionTime":"2026-01-27T21:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.173865 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.173905 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.173915 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.173929 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.173939 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:58Z","lastTransitionTime":"2026-01-27T21:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.259968 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:15:48.583333889 +0000 UTC Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.276681 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.276743 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.276759 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.277150 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.277201 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:58Z","lastTransitionTime":"2026-01-27T21:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.321571 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.346141 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8
c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b45b342099f76ca02becd90c215a07622d8cce152adc8b63680b35adf45c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.365390 4803 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2da
ed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\
",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.379568 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.379616 4803 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.379627 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.379651 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.379664 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:58Z","lastTransitionTime":"2026-01-27T21:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.387318 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.419686 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.434668 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.448702 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.460690 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.473281 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.481549 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.481661 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.481683 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.481716 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.481743 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:58Z","lastTransitionTime":"2026-01-27T21:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.485574 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.499124 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.510182 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.523592 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.538566 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.559142 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.559928 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" event={"ID":"14e37235-ed32-42bc-b5b0-49278fed9593","Type":"ContainerStarted","Data":"a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56"} Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.581172 4803 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b45b342099f76ca02becd90c215a07622d8cce152adc8b63680b35adf45c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.583435 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.583493 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.583540 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.583564 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.583580 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:58Z","lastTransitionTime":"2026-01-27T21:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.594712 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.604816 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.615401 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.633581 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.645237 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.658624 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.671298 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.682121 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.685820 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.685874 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.685883 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.685898 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.685907 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:58Z","lastTransitionTime":"2026-01-27T21:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.696362 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.707703 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.719667 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.730264 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.739935 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.788781 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.788832 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.788853 4803 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.788877 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.788895 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:58Z","lastTransitionTime":"2026-01-27T21:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.891327 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.891362 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.891370 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.891386 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.891396 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:58Z","lastTransitionTime":"2026-01-27T21:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.994079 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.994117 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.994128 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.994144 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:58 crc kubenswrapper[4803]: I0127 21:47:58.994156 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:58Z","lastTransitionTime":"2026-01-27T21:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.096141 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.096171 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.096178 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.096191 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.096202 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:59Z","lastTransitionTime":"2026-01-27T21:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.198319 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.198362 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.198374 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.198392 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.198405 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:59Z","lastTransitionTime":"2026-01-27T21:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.260839 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:49:11.845973442 +0000 UTC Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.300480 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.300524 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.300537 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.300553 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.300566 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:59Z","lastTransitionTime":"2026-01-27T21:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.305941 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.305965 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.305979 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:47:59 crc kubenswrapper[4803]: E0127 21:47:59.306058 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:47:59 crc kubenswrapper[4803]: E0127 21:47:59.306143 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:47:59 crc kubenswrapper[4803]: E0127 21:47:59.306207 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.402278 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.402323 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.402334 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.402351 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.402363 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:59Z","lastTransitionTime":"2026-01-27T21:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.504513 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.504550 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.504561 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.504576 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.504586 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:59Z","lastTransitionTime":"2026-01-27T21:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.564154 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/0.log" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.567303 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="7b45b342099f76ca02becd90c215a07622d8cce152adc8b63680b35adf45c2cc" exitCode=1 Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.567352 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"7b45b342099f76ca02becd90c215a07622d8cce152adc8b63680b35adf45c2cc"} Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.568445 4803 scope.go:117] "RemoveContainer" containerID="7b45b342099f76ca02becd90c215a07622d8cce152adc8b63680b35adf45c2cc" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.586278 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.603630 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.609273 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.609302 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.609312 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.609328 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.609340 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:59Z","lastTransitionTime":"2026-01-27T21:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.620302 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.634078 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.651239 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b45b342099f76ca02becd90c215a07622d8cce1
52adc8b63680b35adf45c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b45b342099f76ca02becd90c215a07622d8cce152adc8b63680b35adf45c2cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:47:59Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0127 21:47:59.005542 6088 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 21:47:59.005575 6088 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 21:47:59.005698 6088 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 21:47:59.005810 6088 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 21:47:59.006100 6088 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 21:47:59.006378 6088 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 21:47:59.006421 6088 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e4
43ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.663369 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.676641 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.684946 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.696596 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.710398 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.711504 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.711542 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.711554 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.711572 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.711582 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:59Z","lastTransitionTime":"2026-01-27T21:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.723426 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.739678 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.750183 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.762512 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.813606 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.813648 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.813659 4803 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.813675 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.813689 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:59Z","lastTransitionTime":"2026-01-27T21:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.915929 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.915963 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.915973 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.915989 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:47:59 crc kubenswrapper[4803]: I0127 21:47:59.915998 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:47:59Z","lastTransitionTime":"2026-01-27T21:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.018745 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.018797 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.018809 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.018828 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.018865 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:00Z","lastTransitionTime":"2026-01-27T21:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.121633 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.121668 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.121678 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.121691 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.121700 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:00Z","lastTransitionTime":"2026-01-27T21:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.224449 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.224492 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.224506 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.224522 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.224531 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:00Z","lastTransitionTime":"2026-01-27T21:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.261994 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:23:38.576484463 +0000 UTC Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.326647 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.326706 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.326718 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.326742 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.326756 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:00Z","lastTransitionTime":"2026-01-27T21:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.429616 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.429662 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.429671 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.429688 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.429699 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:00Z","lastTransitionTime":"2026-01-27T21:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.532131 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.532303 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.532391 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.532488 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.532581 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:00Z","lastTransitionTime":"2026-01-27T21:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.572868 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/1.log" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.573689 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/0.log" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.576068 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b" exitCode=1 Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.576154 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b"} Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.576346 4803 scope.go:117] "RemoveContainer" containerID="7b45b342099f76ca02becd90c215a07622d8cce152adc8b63680b35adf45c2cc" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.576662 4803 scope.go:117] "RemoveContainer" containerID="f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b" Jan 27 21:48:00 crc kubenswrapper[4803]: E0127 21:48:00.576991 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.592342 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.604910 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.622329 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.636121 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.636195 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.636218 4803 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.636249 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.636269 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:00Z","lastTransitionTime":"2026-01-27T21:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.639424 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.657257 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b45b342099f76ca02becd90c215a07622d8cce152adc8b63680b35adf45c2cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:47:59Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0127 21:47:59.005542 6088 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 21:47:59.005575 6088 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 21:47:59.005698 6088 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 21:47:59.005810 6088 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 21:47:59.006100 6088 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 21:47:59.006378 6088 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 21:47:59.006421 6088 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:00Z\\\",\\\"message\\\":\\\"ontroller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z]\\\\nI0127 21:48:00.516127 6263 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 18.76µs\\\\nI0127 21:48:00.516111 6263 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Pr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4
849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.671386 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.685577 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.697897 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.719381 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.733696 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.744276 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.744398 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.744432 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.744458 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.744477 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:00Z","lastTransitionTime":"2026-01-27T21:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.760592 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.775287 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.788220 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.803207 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.848475 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.848529 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.848540 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.848558 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.848571 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:00Z","lastTransitionTime":"2026-01-27T21:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.951383 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.951432 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.951441 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.951461 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:00 crc kubenswrapper[4803]: I0127 21:48:00.951474 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:00Z","lastTransitionTime":"2026-01-27T21:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.055097 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.055136 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.055148 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.055165 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.055176 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:01Z","lastTransitionTime":"2026-01-27T21:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.158719 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.158768 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.158776 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.158803 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.158814 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:01Z","lastTransitionTime":"2026-01-27T21:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.261245 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.261293 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.261308 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.261327 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.261339 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:01Z","lastTransitionTime":"2026-01-27T21:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.262418 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:41:22.428061128 +0000 UTC Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.306040 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.306068 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.306210 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:01 crc kubenswrapper[4803]: E0127 21:48:01.306336 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:01 crc kubenswrapper[4803]: E0127 21:48:01.306505 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:01 crc kubenswrapper[4803]: E0127 21:48:01.306732 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.364723 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.364773 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.364790 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.364813 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.364831 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:01Z","lastTransitionTime":"2026-01-27T21:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.468821 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.468948 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.468974 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.469004 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.469025 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:01Z","lastTransitionTime":"2026-01-27T21:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.571952 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.572029 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.572055 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.572087 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.572111 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:01Z","lastTransitionTime":"2026-01-27T21:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.582389 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/1.log" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.587166 4803 scope.go:117] "RemoveContainer" containerID="f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b" Jan 27 21:48:01 crc kubenswrapper[4803]: E0127 21:48:01.587428 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.608080 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.630552 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.650448 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.674752 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.674788 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.674800 4803 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.674816 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.674829 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:01Z","lastTransitionTime":"2026-01-27T21:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.684630 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c09
34e9dfaee41905fa7375bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:00Z\\\",\\\"message\\\":\\\"ontroller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z]\\\\nI0127 21:48:00.516127 6263 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 18.76µs\\\\nI0127 21:48:00.516111 6263 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Pr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.705098 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.723624 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.743422 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.769483 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.777917 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.778107 4803 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.778225 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.778325 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.778416 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:01Z","lastTransitionTime":"2026-01-27T21:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.789641 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.811270 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.837456 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.839663 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m"] Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.840323 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.842811 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.843187 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.861219 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.881197 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.881263 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.881274 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.881315 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.881330 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:01Z","lastTransitionTime":"2026-01-27T21:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.882904 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.902290 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.921148 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nsfb\" (UniqueName: \"kubernetes.io/projected/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-kube-api-access-4nsfb\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.921259 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.921319 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.921631 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.932745 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c09
34e9dfaee41905fa7375bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:00Z\\\",\\\"message\\\":\\\"ontroller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z]\\\\nI0127 21:48:00.516127 6263 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 18.76µs\\\\nI0127 21:48:00.516111 6263 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Pr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.948983 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.961683 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.975752 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.983727 4803 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.983803 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.983830 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.983916 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.984120 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:01Z","lastTransitionTime":"2026-01-27T21:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:01 crc kubenswrapper[4803]: I0127 21:48:01.998201 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.017053 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.022622 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.022695 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.022800 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.022901 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nsfb\" (UniqueName: \"kubernetes.io/projected/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-kube-api-access-4nsfb\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.023757 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.024476 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.032686 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.039362 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.040687 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nsfb\" (UniqueName: \"kubernetes.io/projected/7c089f04-d9e7-4bca-b221-dfaf322e1ea0-kube-api-access-4nsfb\") pod \"ovnkube-control-plane-749d76644c-kvp7m\" (UID: \"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.055692 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.073225 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.086512 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.087116 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.087168 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.087178 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.087193 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.087204 4803 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:02Z","lastTransitionTime":"2026-01-27T21:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.106506 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.119553 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.134066 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.151936 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.158807 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.171987 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: W0127 21:48:02.172036 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c089f04_d9e7_4bca_b221_dfaf322e1ea0.slice/crio-c671bbddcf6e0b14d8879425a78187cf06b898e3528d85b13b68c86e28dbabde WatchSource:0}: Error finding container c671bbddcf6e0b14d8879425a78187cf06b898e3528d85b13b68c86e28dbabde: Status 404 returned error can't find the container with id 
c671bbddcf6e0b14d8879425a78187cf06b898e3528d85b13b68c86e28dbabde Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.190317 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.190370 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.190383 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.190400 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.190412 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:02Z","lastTransitionTime":"2026-01-27T21:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.262793 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:57:58.403443601 +0000 UTC Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.292611 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.292650 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.292661 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.292677 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.292688 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:02Z","lastTransitionTime":"2026-01-27T21:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.394595 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.394648 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.394664 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.394685 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.394700 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:02Z","lastTransitionTime":"2026-01-27T21:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.497982 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.498024 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.498035 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.498053 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.498066 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:02Z","lastTransitionTime":"2026-01-27T21:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.592540 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" event={"ID":"7c089f04-d9e7-4bca-b221-dfaf322e1ea0","Type":"ContainerStarted","Data":"4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.592640 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" event={"ID":"7c089f04-d9e7-4bca-b221-dfaf322e1ea0","Type":"ContainerStarted","Data":"422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.592662 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" event={"ID":"7c089f04-d9e7-4bca-b221-dfaf322e1ea0","Type":"ContainerStarted","Data":"c671bbddcf6e0b14d8879425a78187cf06b898e3528d85b13b68c86e28dbabde"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.599991 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.600067 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.600095 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.600311 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.600335 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:02Z","lastTransitionTime":"2026-01-27T21:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.615208 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.637042 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.650499 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.674241 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.687646 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.703958 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.704031 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.704050 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.704103 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.704130 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:02Z","lastTransitionTime":"2026-01-27T21:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.705770 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.718679 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.731486 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.746967 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.759684 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.781774 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.793507 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.807242 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.807286 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.807295 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.807309 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.807319 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:02Z","lastTransitionTime":"2026-01-27T21:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
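The recurring NodeNotReady condition is mechanical: the CRI runtime reports NetworkReady=false as long as /etc/kubernetes/cni/net.d/ holds no CNI configuration, and the kubelet relays that verbatim. A rough Go sketch of that directory test (illustrative only, not CRI-O's actual implementation; the extension filter follows what libcni loads):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory taken from the log message itself.
	const confDir = "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("read dir:", err)
		return
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni recognizes
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration files - node stays NotReady")
		return
	}
	fmt.Println("CNI config present:", found)
}
```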
Has your network provider started?"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.807698 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.819528 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.839540 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c09
34e9dfaee41905fa7375bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:00Z\\\",\\\"message\\\":\\\"ontroller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z]\\\\nI0127 21:48:00.516127 6263 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 18.76µs\\\\nI0127 21:48:00.516111 6263 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Pr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.909397 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.909457 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.909474 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.909491 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.909504 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:02Z","lastTransitionTime":"2026-01-27T21:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.933234 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.933369 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:02 crc kubenswrapper[4803]: E0127 21:48:02.933463 4803 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:48:02 crc kubenswrapper[4803]: E0127 21:48:02.933461 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:48:18.933425521 +0000 UTC m=+51.349447240 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.933552 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.933591 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:02 crc kubenswrapper[4803]: E0127 21:48:02.933822 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:48:02 crc kubenswrapper[4803]: E0127 21:48:02.933840 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:48:02 crc kubenswrapper[4803]: E0127 21:48:02.933874 4803 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:48:02 crc kubenswrapper[4803]: E0127 21:48:02.933920 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:18.933909993 +0000 UTC m=+51.349931702 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:48:02 crc kubenswrapper[4803]: E0127 21:48:02.933944 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:18.933936654 +0000 UTC m=+51.349958363 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:48:02 crc kubenswrapper[4803]: E0127 21:48:02.934022 4803 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:48:02 crc kubenswrapper[4803]: E0127 21:48:02.934093 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:18.934071787 +0000 UTC m=+51.350093516 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.965813 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-72wq6"] Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.966244 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:02 crc kubenswrapper[4803]: E0127 21:48:02.966300 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:02 crc kubenswrapper[4803]: I0127 21:48:02.985385 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.012079 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.012117 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.012126 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.012141 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.012150 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.013114 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:00Z\\\",\\\"message\\\":\\\"ontroller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z]\\\\nI0127 21:48:00.516127 6263 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 18.76µs\\\\nI0127 21:48:00.516111 6263 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Pr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.032079 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.034231 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8vn\" (UniqueName: \"kubernetes.io/projected/0d757da7-4079-4a7a-806d-560834fe95ae-kube-api-access-zc8vn\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " 
pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.034303 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.034327 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.034425 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.034445 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.034455 4803 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.034501 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:19.034489452 +0000 UTC m=+51.450511151 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.045223 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.065338 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.076023 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.091378 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.106241 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.114724 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.114779 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.114797 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.114823 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.114866 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.121877 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.132417 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.134868 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.134922 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc8vn\" (UniqueName: \"kubernetes.io/projected/0d757da7-4079-4a7a-806d-560834fe95ae-kube-api-access-zc8vn\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.135060 4803 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.135131 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs podName:0d757da7-4079-4a7a-806d-560834fe95ae nodeName:}" failed. No retries permitted until 2026-01-27 21:48:03.635112802 +0000 UTC m=+36.051134511 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs") pod "network-metrics-daemon-72wq6" (UID: "0d757da7-4079-4a7a-806d-560834fe95ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.143151 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.151587 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc8vn\" (UniqueName: \"kubernetes.io/projected/0d757da7-4079-4a7a-806d-560834fe95ae-kube-api-access-zc8vn\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.154551 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.167373 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.181081 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.193002 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.210434 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.216880 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.216950 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.216966 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.216982 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.216993 4803 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.263940 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:50:03.179915675 +0000 UTC Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.306440 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.306611 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.306761 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.306969 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.306981 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.307344 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.319046 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.319095 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.319105 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.319121 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.319132 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.421552 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.421596 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.421611 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.421627 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.421639 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.523720 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.523801 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.523895 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.523925 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.523946 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.627129 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.627212 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.627235 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.627267 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.627290 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.638836 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.638989 4803 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.639043 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs podName:0d757da7-4079-4a7a-806d-560834fe95ae nodeName:}" failed. No retries permitted until 2026-01-27 21:48:04.639029603 +0000 UTC m=+37.055051302 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs") pod "network-metrics-daemon-72wq6" (UID: "0d757da7-4079-4a7a-806d-560834fe95ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.663364 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.663444 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.663464 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.663553 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.663573 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.682100 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.687462 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.687519 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.687532 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.687565 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.687580 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.700552 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.705565 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.705624 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.705638 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.705658 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.705672 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.717341 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.721366 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.721443 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.721464 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.721495 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.721515 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.738101 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.742485 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.742535 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.742550 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.742574 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.742592 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.754705 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:03Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:03 crc kubenswrapper[4803]: E0127 21:48:03.754816 4803 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.757409 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.757457 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.757468 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.757489 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.757503 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.860147 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.860234 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.860254 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.860298 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.860342 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.963284 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.963323 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.963335 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.963350 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:03 crc kubenswrapper[4803]: I0127 21:48:03.963359 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:03Z","lastTransitionTime":"2026-01-27T21:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.066261 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.066314 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.066332 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.066357 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.066376 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:04Z","lastTransitionTime":"2026-01-27T21:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.169267 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.169713 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.169725 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.169743 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.169757 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:04Z","lastTransitionTime":"2026-01-27T21:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.264808 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:03:14.21534158 +0000 UTC Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.272913 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.272998 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.273023 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.273056 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.273097 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:04Z","lastTransitionTime":"2026-01-27T21:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.377167 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.377230 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.377248 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.377273 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.377287 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:04Z","lastTransitionTime":"2026-01-27T21:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.481680 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.482116 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.482309 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.482522 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.482714 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:04Z","lastTransitionTime":"2026-01-27T21:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.586957 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.587020 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.587040 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.587067 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.587091 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:04Z","lastTransitionTime":"2026-01-27T21:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.651504 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:04 crc kubenswrapper[4803]: E0127 21:48:04.651773 4803 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 21:48:04 crc kubenswrapper[4803]: E0127 21:48:04.651932 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs podName:0d757da7-4079-4a7a-806d-560834fe95ae nodeName:}" failed. No retries permitted until 2026-01-27 21:48:06.651896998 +0000 UTC m=+39.067918727 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs") pod "network-metrics-daemon-72wq6" (UID: "0d757da7-4079-4a7a-806d-560834fe95ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.691010 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.691077 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.691095 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.691119 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.691136 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:04Z","lastTransitionTime":"2026-01-27T21:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.794826 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.794921 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.794943 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.794973 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.794995 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:04Z","lastTransitionTime":"2026-01-27T21:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.898465 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.898523 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.898539 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.898562 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:04 crc kubenswrapper[4803]: I0127 21:48:04.898578 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:04Z","lastTransitionTime":"2026-01-27T21:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.001334 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.001710 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.001916 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.002157 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.002349 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:05Z","lastTransitionTime":"2026-01-27T21:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.105906 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.105980 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.106005 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.106036 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.106059 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:05Z","lastTransitionTime":"2026-01-27T21:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.208255 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.208292 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.208302 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.208315 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.208326 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:05Z","lastTransitionTime":"2026-01-27T21:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.265240 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:13:38.248951391 +0000 UTC Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.306691 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.306757 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.306757 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:05 crc kubenswrapper[4803]: E0127 21:48:05.306879 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:05 crc kubenswrapper[4803]: E0127 21:48:05.306919 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:05 crc kubenswrapper[4803]: E0127 21:48:05.306977 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.307463 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:05 crc kubenswrapper[4803]: E0127 21:48:05.307768 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.310264 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.310293 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.310303 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.310317 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.310328 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:05Z","lastTransitionTime":"2026-01-27T21:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.413573 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.413624 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.413635 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.413654 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.413666 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:05Z","lastTransitionTime":"2026-01-27T21:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.516492 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.516574 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.516595 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.516623 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.516675 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:05Z","lastTransitionTime":"2026-01-27T21:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.619918 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.619990 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.620075 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.620122 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.620146 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:05Z","lastTransitionTime":"2026-01-27T21:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.723336 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.723400 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.723419 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.723444 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.723463 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:05Z","lastTransitionTime":"2026-01-27T21:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.826143 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.826205 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.826226 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.826250 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.826267 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:05Z","lastTransitionTime":"2026-01-27T21:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.929309 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.929379 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.929396 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.929421 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:05 crc kubenswrapper[4803]: I0127 21:48:05.929441 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:05Z","lastTransitionTime":"2026-01-27T21:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.032973 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.033038 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.033063 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.033093 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.033116 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:06Z","lastTransitionTime":"2026-01-27T21:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.136211 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.136248 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.136258 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.136271 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.136280 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:06Z","lastTransitionTime":"2026-01-27T21:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.238992 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.239036 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.239048 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.239064 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.239076 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:06Z","lastTransitionTime":"2026-01-27T21:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.250380 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.265755 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:18:17.095326797 +0000 UTC Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.266282 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.290605 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.311987 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.335826 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.341153 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.341188 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.341200 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.341214 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.341224 4803 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:06Z","lastTransitionTime":"2026-01-27T21:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.348701 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.365241 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:00Z\\\",\\\"message\\\":\\\"ontroller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z]\\\\nI0127 21:48:00.516127 6263 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 18.76µs\\\\nI0127 21:48:00.516111 6263 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Pr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.375277 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.393449 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.404513 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.418785 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.428169 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.439819 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.443370 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.443410 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.443420 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.443435 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:06 crc 
kubenswrapper[4803]: I0127 21:48:06.443444 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:06Z","lastTransitionTime":"2026-01-27T21:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.455784 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.472167 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.487022 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.540443 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.547201 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.547294 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.547310 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.547648 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.547697 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:06Z","lastTransitionTime":"2026-01-27T21:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.649995 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.650041 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.650054 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.650072 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.650085 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:06Z","lastTransitionTime":"2026-01-27T21:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.671665 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:48:06 crc kubenswrapper[4803]: E0127 21:48:06.671799 4803 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 21:48:06 crc kubenswrapper[4803]: E0127 21:48:06.671920 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs podName:0d757da7-4079-4a7a-806d-560834fe95ae nodeName:}" failed. No retries permitted until 2026-01-27 21:48:10.671901886 +0000 UTC m=+43.087923595 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs") pod "network-metrics-daemon-72wq6" (UID: "0d757da7-4079-4a7a-806d-560834fe95ae") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.752761 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.752828 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.752865 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.752889 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.752906 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:06Z","lastTransitionTime":"2026-01-27T21:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.855248 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.855323 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.855342 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.855367 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.855387 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:06Z","lastTransitionTime":"2026-01-27T21:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.957299 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.957349 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.957360 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.957377 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:06 crc kubenswrapper[4803]: I0127 21:48:06.957393 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:06Z","lastTransitionTime":"2026-01-27T21:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.060431 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.060485 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.060499 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.060520 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.060534 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:07Z","lastTransitionTime":"2026-01-27T21:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.163064 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.163148 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.163173 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.163207 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.163231 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:07Z","lastTransitionTime":"2026-01-27T21:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.265927 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:43:53.20901193 +0000 UTC
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.266918 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.266966 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.266982 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.267002 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.267015 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:07Z","lastTransitionTime":"2026-01-27T21:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.305912 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.305966 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.306027 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.305912 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:48:07 crc kubenswrapper[4803]: E0127 21:48:07.306090 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:48:07 crc kubenswrapper[4803]: E0127 21:48:07.306216 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:48:07 crc kubenswrapper[4803]: E0127 21:48:07.306369 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:48:07 crc kubenswrapper[4803]: E0127 21:48:07.306540 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.369910 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.369958 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.369976 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.369999 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.370021 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:07Z","lastTransitionTime":"2026-01-27T21:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.473163 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.473225 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.473242 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.473265 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.473284 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:07Z","lastTransitionTime":"2026-01-27T21:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.576626 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.576684 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.576703 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.576725 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.576742 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:07Z","lastTransitionTime":"2026-01-27T21:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.679545 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.679600 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.679617 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.679643 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.679662 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:07Z","lastTransitionTime":"2026-01-27T21:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.783730 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.783787 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.783803 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.783828 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.783875 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:07Z","lastTransitionTime":"2026-01-27T21:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.886834 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.886937 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.886956 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.886981 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.886999 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:07Z","lastTransitionTime":"2026-01-27T21:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.989703 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.989774 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.989795 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.989818 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:07 crc kubenswrapper[4803]: I0127 21:48:07.989836 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:07Z","lastTransitionTime":"2026-01-27T21:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.093176 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.093248 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.093268 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.093295 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.093314 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:08Z","lastTransitionTime":"2026-01-27T21:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.196748 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.196815 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.196834 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.196883 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.196904 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:08Z","lastTransitionTime":"2026-01-27T21:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.266708 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:52:16.171773891 +0000 UTC
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.300474 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.300534 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.300551 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.300611 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.300629 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:08Z","lastTransitionTime":"2026-01-27T21:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.338204 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:00Z\\\",\\\"message\\\":\\\"ontroller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z]\\\\nI0127 21:48:00.516127 6263 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 18.76µs\\\\nI0127 21:48:00.516111 6263 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Pr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.363294 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.382774 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.403832 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.404740 4803 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.404798 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.404817 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.404888 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.404914 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:08Z","lastTransitionTime":"2026-01-27T21:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.430542 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.447119 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.467910 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.489402 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.508348 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.508496 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.508521 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.508554 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.508575 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:08Z","lastTransitionTime":"2026-01-27T21:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.511150 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.529729 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.550675 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.566527 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 
21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.590412 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.611409 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.612407 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.612508 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.612537 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.612573 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.612605 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:08Z","lastTransitionTime":"2026-01-27T21:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.628175 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.644867 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:08Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.715149 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.715184 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.715194 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.715209 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.715219 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:08Z","lastTransitionTime":"2026-01-27T21:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.817669 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.817719 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.817738 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.817763 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.817781 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:08Z","lastTransitionTime":"2026-01-27T21:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.926565 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.926632 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.926652 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.926677 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:08 crc kubenswrapper[4803]: I0127 21:48:08.926699 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:08Z","lastTransitionTime":"2026-01-27T21:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.029717 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.029766 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.029778 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.029794 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.029806 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:09Z","lastTransitionTime":"2026-01-27T21:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.134009 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.134110 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.134129 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.134154 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.134172 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:09Z","lastTransitionTime":"2026-01-27T21:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.237794 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.238350 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.238516 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.238680 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.238809 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:09Z","lastTransitionTime":"2026-01-27T21:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.267450 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:54:28.369741077 +0000 UTC
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.306329 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.306351 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.306344 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.306343 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:48:09 crc kubenswrapper[4803]: E0127 21:48:09.306454 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:48:09 crc kubenswrapper[4803]: E0127 21:48:09.306611 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:48:09 crc kubenswrapper[4803]: E0127 21:48:09.306632 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:48:09 crc kubenswrapper[4803]: E0127 21:48:09.306685 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.342167 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.342208 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.342222 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.342242 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.342257 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:09Z","lastTransitionTime":"2026-01-27T21:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.446597 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.446689 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.446710 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.446769 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.446788 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:09Z","lastTransitionTime":"2026-01-27T21:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.550710 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.550780 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.550797 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.550823 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.550840 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:09Z","lastTransitionTime":"2026-01-27T21:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.654020 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.654086 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.654107 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.654134 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.654152 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:09Z","lastTransitionTime":"2026-01-27T21:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.757366 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.757422 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.757435 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.757454 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.757467 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:09Z","lastTransitionTime":"2026-01-27T21:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.860251 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.860318 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.860328 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.860341 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.860349 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:09Z","lastTransitionTime":"2026-01-27T21:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.964394 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.964494 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.964515 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.964549 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:09 crc kubenswrapper[4803]: I0127 21:48:09.964571 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:09Z","lastTransitionTime":"2026-01-27T21:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.068925 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.069005 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.069025 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.069053 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.069073 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:10Z","lastTransitionTime":"2026-01-27T21:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.173168 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.173251 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.173279 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.173318 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.173344 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:10Z","lastTransitionTime":"2026-01-27T21:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.268481 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:55:45.890374792 +0000 UTC
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.276552 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.276609 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.276629 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.276658 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.276677 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:10Z","lastTransitionTime":"2026-01-27T21:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.380076 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.380156 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.380177 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.380206 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.380228 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:10Z","lastTransitionTime":"2026-01-27T21:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.483925 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.484039 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.484070 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.484111 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.484142 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:10Z","lastTransitionTime":"2026-01-27T21:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.588072 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.588158 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.588182 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.588214 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.588237 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:10Z","lastTransitionTime":"2026-01-27T21:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.691978 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.692063 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.692088 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.692124 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.692157 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:10Z","lastTransitionTime":"2026-01-27T21:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.715238 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:48:10 crc kubenswrapper[4803]: E0127 21:48:10.715525 4803 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 21:48:10 crc kubenswrapper[4803]: E0127 21:48:10.715689 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs podName:0d757da7-4079-4a7a-806d-560834fe95ae nodeName:}" failed. No retries permitted until 2026-01-27 21:48:18.715650103 +0000 UTC m=+51.131671922 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs") pod "network-metrics-daemon-72wq6" (UID: "0d757da7-4079-4a7a-806d-560834fe95ae") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.795078 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.795152 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.795163 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.795180 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.795213 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:10Z","lastTransitionTime":"2026-01-27T21:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.898471 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.898541 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.898562 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.898591 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:10 crc kubenswrapper[4803]: I0127 21:48:10.898609 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:10Z","lastTransitionTime":"2026-01-27T21:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.002111 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.002183 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.002200 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.002226 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.002244 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:11Z","lastTransitionTime":"2026-01-27T21:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.105651 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.105725 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.105743 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.105769 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.105789 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:11Z","lastTransitionTime":"2026-01-27T21:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.208697 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.208765 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.208786 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.208809 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.208827 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:11Z","lastTransitionTime":"2026-01-27T21:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.269310 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:05:48.495166099 +0000 UTC
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.306606 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.306681 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.306723 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.306641 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:48:11 crc kubenswrapper[4803]: E0127 21:48:11.306933 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:48:11 crc kubenswrapper[4803]: E0127 21:48:11.307037 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:48:11 crc kubenswrapper[4803]: E0127 21:48:11.307147 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:48:11 crc kubenswrapper[4803]: E0127 21:48:11.307292 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.311543 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.311608 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.311627 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.311653 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.311674 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:11Z","lastTransitionTime":"2026-01-27T21:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.414898 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.414958 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.414972 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.414995 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.415012 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:11Z","lastTransitionTime":"2026-01-27T21:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.517346 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.517394 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.517405 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.517423 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.517435 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:11Z","lastTransitionTime":"2026-01-27T21:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.621841 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.622023 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.622049 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.622073 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.622091 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:11Z","lastTransitionTime":"2026-01-27T21:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.725594 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.725665 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.725689 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.725722 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.725744 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:11Z","lastTransitionTime":"2026-01-27T21:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.829223 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.829279 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.829294 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.829316 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.829329 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:11Z","lastTransitionTime":"2026-01-27T21:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.931715 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.931765 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.931778 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.931792 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:11 crc kubenswrapper[4803]: I0127 21:48:11.931801 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:11Z","lastTransitionTime":"2026-01-27T21:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.034918 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.034967 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.034978 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.034995 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.035006 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:12Z","lastTransitionTime":"2026-01-27T21:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.137890 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.137938 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.137950 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.137968 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.137980 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:12Z","lastTransitionTime":"2026-01-27T21:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.242477 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.242532 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.242543 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.242560 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.242573 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:12Z","lastTransitionTime":"2026-01-27T21:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.269491 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 10:38:53.852059778 +0000 UTC
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.345922 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.345995 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.346015 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.346040 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.346066 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:12Z","lastTransitionTime":"2026-01-27T21:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.449089 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.449137 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.449152 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.449173 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.449187 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:12Z","lastTransitionTime":"2026-01-27T21:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.551762 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.551839 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.551901 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.551929 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.551947 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:12Z","lastTransitionTime":"2026-01-27T21:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.654760 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.654830 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.654896 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.654931 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.654955 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:12Z","lastTransitionTime":"2026-01-27T21:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.757750 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.757795 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.757803 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.757817 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.757826 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:12Z","lastTransitionTime":"2026-01-27T21:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.860712 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.860762 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.860783 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.860807 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.860826 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:12Z","lastTransitionTime":"2026-01-27T21:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.964122 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.964203 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.964229 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.964264 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:12 crc kubenswrapper[4803]: I0127 21:48:12.964287 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:12Z","lastTransitionTime":"2026-01-27T21:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.067578 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.067653 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.067678 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.067717 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.067740 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:13Z","lastTransitionTime":"2026-01-27T21:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.171157 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.171248 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.171273 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.171308 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.171332 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:13Z","lastTransitionTime":"2026-01-27T21:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.270659 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:35:45.747677541 +0000 UTC
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.274467 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.275157 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.275200 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.275224 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.275239 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:13Z","lastTransitionTime":"2026-01-27T21:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.306035 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.306187 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.306518 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.306583 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:13 crc kubenswrapper[4803]: E0127 21:48:13.306934 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:13 crc kubenswrapper[4803]: E0127 21:48:13.307072 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:13 crc kubenswrapper[4803]: E0127 21:48:13.307106 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.307185 4803 scope.go:117] "RemoveContainer" containerID="f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b" Jan 27 21:48:13 crc kubenswrapper[4803]: E0127 21:48:13.307183 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.378188 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.378216 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.378225 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.378242 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.378252 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:13Z","lastTransitionTime":"2026-01-27T21:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.482287 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.482334 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.482345 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.482364 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.482375 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:13Z","lastTransitionTime":"2026-01-27T21:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.587191 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.587455 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.587518 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.587593 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.587655 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:13Z","lastTransitionTime":"2026-01-27T21:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.632953 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/1.log" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.635725 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c"} Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.635870 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.648618 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\
"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.660655 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.676943 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.690439 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.690472 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.690486 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.690502 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.690513 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:13Z","lastTransitionTime":"2026-01-27T21:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.693980 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.707239 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.723798 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc 
kubenswrapper[4803]: I0127 21:48:13.738895 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.752489 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.765303 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.777614 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.792988 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.793020 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.793029 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.793042 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.793052 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:13Z","lastTransitionTime":"2026-01-27T21:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.794131 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:00Z\\\",\\\"message\\\":\\\"ontroller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z]\\\\nI0127 21:48:00.516127 6263 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 18.76µs\\\\nI0127 21:48:00.516111 6263 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Pr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.804368 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.812638 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.822310 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.838485 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 
2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.850281 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:13Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.895053 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.895095 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.895107 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.895123 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.895134 4803 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:13Z","lastTransitionTime":"2026-01-27T21:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.998035 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.998086 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.998099 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.998117 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:13 crc kubenswrapper[4803]: I0127 21:48:13.998129 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:13Z","lastTransitionTime":"2026-01-27T21:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.052183 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.052223 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.052233 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.052250 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.052261 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: E0127 21:48:14.063261 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 
2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.066714 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.066763 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.066777 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.066803 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.066817 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: E0127 21:48:14.086989 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 
2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.091282 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.091327 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.091337 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.091351 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.091360 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: E0127 21:48:14.103173 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 
2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.106522 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.106559 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.106570 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.106586 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.106597 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: E0127 21:48:14.118870 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 
2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.122292 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.122327 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.122336 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.122352 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.122362 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: E0127 21:48:14.133967 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 
2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: E0127 21:48:14.134081 4803 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.135616 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.135646 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.135655 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.135670 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.135681 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.238348 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.238389 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.238401 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.238418 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.238449 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.271168 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 12:25:20.534366212 +0000 UTC Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.340830 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.340907 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.340920 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.340937 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.340948 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.443677 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.443755 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.443782 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.443807 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.443826 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.548307 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.548429 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.548452 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.548484 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.548506 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.642025 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/2.log" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.643094 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/1.log" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.646150 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c" exitCode=1 Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.646260 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c"} Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.646334 4803 scope.go:117] "RemoveContainer" containerID="f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.647538 4803 scope.go:117] "RemoveContainer" containerID="6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c" Jan 27 21:48:14 crc kubenswrapper[4803]: E0127 21:48:14.647917 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.651359 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.651512 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.651596 4803 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.651719 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.651824 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.678668 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbe
b8cf68e79eaf3209862af81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8ad734c9338d5a42e5fbdb52378830517791c0934e9dfaee41905fa7375bc0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:00Z\\\",\\\"message\\\":\\\"ontroller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:00Z is after 2025-08-24T17:21:41Z]\\\\nI0127 21:48:00.516127 6263 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 18.76µs\\\\nI0127 21:48:00.516111 6263 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Pr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150112 6467 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150147 6467 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kube\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.696790 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.712034 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.727590 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.746638 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.754544 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.754586 4803 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.754601 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.754621 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.754635 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.760350 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.775543 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.791518 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.803210 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.815386 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.835710 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.852627 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.857817 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.857893 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.857908 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.857929 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.857943 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.871272 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.887553 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.902162 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.915055 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:14Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.960726 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.960791 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.960808 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.960833 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:14 crc kubenswrapper[4803]: I0127 21:48:14.960882 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:14Z","lastTransitionTime":"2026-01-27T21:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.063837 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.063935 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.063951 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.063970 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.063983 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:15Z","lastTransitionTime":"2026-01-27T21:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.166325 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.166385 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.166403 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.166427 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.166447 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:15Z","lastTransitionTime":"2026-01-27T21:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.269180 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.269221 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.269235 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.269251 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.269263 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:15Z","lastTransitionTime":"2026-01-27T21:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.272138 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:03:35.354134324 +0000 UTC Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.306587 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.306604 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.306606 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.306727 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:15 crc kubenswrapper[4803]: E0127 21:48:15.306894 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:15 crc kubenswrapper[4803]: E0127 21:48:15.307106 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:15 crc kubenswrapper[4803]: E0127 21:48:15.307231 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:15 crc kubenswrapper[4803]: E0127 21:48:15.307320 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.371974 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.372021 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.372040 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.372096 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.372114 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:15Z","lastTransitionTime":"2026-01-27T21:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.474283 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.474438 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.474463 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.474511 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.474536 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:15Z","lastTransitionTime":"2026-01-27T21:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.577799 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.577919 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.577946 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.577972 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.577991 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:15Z","lastTransitionTime":"2026-01-27T21:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.652054 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/2.log" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.680453 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.680514 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.680531 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.680555 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.680573 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:15Z","lastTransitionTime":"2026-01-27T21:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.784303 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.784575 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.784604 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.784644 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.784677 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:15Z","lastTransitionTime":"2026-01-27T21:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.888697 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.888799 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.888818 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.888888 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.888919 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:15Z","lastTransitionTime":"2026-01-27T21:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.991753 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.991807 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.991819 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.991841 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:15 crc kubenswrapper[4803]: I0127 21:48:15.991907 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:15Z","lastTransitionTime":"2026-01-27T21:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.095388 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.095457 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.095484 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.095522 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.095545 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:16Z","lastTransitionTime":"2026-01-27T21:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.198544 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.198657 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.198676 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.198709 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.198730 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:16Z","lastTransitionTime":"2026-01-27T21:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.273238 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:16:22.95400212 +0000 UTC Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.301921 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.302015 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.302030 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.302050 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.302061 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:16Z","lastTransitionTime":"2026-01-27T21:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.405643 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.405716 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.405736 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.405766 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.405785 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:16Z","lastTransitionTime":"2026-01-27T21:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.509046 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.509117 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.509141 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.509177 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.509204 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:16Z","lastTransitionTime":"2026-01-27T21:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.612424 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.612525 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.612547 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.612576 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.612601 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:16Z","lastTransitionTime":"2026-01-27T21:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.716335 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.716411 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.716434 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.716471 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.716495 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:16Z","lastTransitionTime":"2026-01-27T21:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.820807 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.820884 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.820899 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.820921 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.820936 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:16Z","lastTransitionTime":"2026-01-27T21:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.924395 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.924478 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.924498 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.924536 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:16 crc kubenswrapper[4803]: I0127 21:48:16.924558 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:16Z","lastTransitionTime":"2026-01-27T21:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.028618 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.028705 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.028726 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.028756 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.028775 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:17Z","lastTransitionTime":"2026-01-27T21:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.131220 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.131290 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.131330 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.131349 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.131360 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:17Z","lastTransitionTime":"2026-01-27T21:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.233625 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.233670 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.233750 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.233773 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.233785 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:17Z","lastTransitionTime":"2026-01-27T21:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.273841 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:38:51.817811423 +0000 UTC Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.306565 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.306623 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.306591 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.306702 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:17 crc kubenswrapper[4803]: E0127 21:48:17.306761 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:17 crc kubenswrapper[4803]: E0127 21:48:17.306988 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:17 crc kubenswrapper[4803]: E0127 21:48:17.307115 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:17 crc kubenswrapper[4803]: E0127 21:48:17.307254 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.336758 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.336782 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.336790 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.336802 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.336811 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:17Z","lastTransitionTime":"2026-01-27T21:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.439398 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.439449 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.439458 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.439470 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.439479 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:17Z","lastTransitionTime":"2026-01-27T21:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.464145 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.464821 4803 scope.go:117] "RemoveContainer" containerID="6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c" Jan 27 21:48:17 crc kubenswrapper[4803]: E0127 21:48:17.465014 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.486028 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.517750 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbe
b8cf68e79eaf3209862af81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150112 6467 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150147 6467 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kube\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.542293 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.542375 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.542398 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.542430 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.542454 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:17Z","lastTransitionTime":"2026-01-27T21:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.547127 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.567719 4803 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.588066 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.606029 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.627327 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.645373 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.645437 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.645453 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.645477 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.645495 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:17Z","lastTransitionTime":"2026-01-27T21:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.654138 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.679594 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.698630 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.722452 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.743077 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.748391 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.748464 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.748490 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.748524 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.748550 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:17Z","lastTransitionTime":"2026-01-27T21:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.765585 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.791237 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.811724 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.835141 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:17Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.852480 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.852563 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.852582 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.852611 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.852635 4803 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:17Z","lastTransitionTime":"2026-01-27T21:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.955526 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.955697 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.955723 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.955749 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:17 crc kubenswrapper[4803]: I0127 21:48:17.955767 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:17Z","lastTransitionTime":"2026-01-27T21:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.058798 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.059337 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.059706 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.059973 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.060199 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:18Z","lastTransitionTime":"2026-01-27T21:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.163103 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.163619 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.163771 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.163969 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.164161 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:18Z","lastTransitionTime":"2026-01-27T21:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.185786 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.203379 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.209813 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.226623 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.244245 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.265089 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 
2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.267687 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.267869 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.267966 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.268095 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.268211 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:18Z","lastTransitionTime":"2026-01-27T21:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.274050 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:24:56.246365986 +0000 UTC Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.285338 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.306619 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:
8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.328303 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.350371 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.371776 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.371894 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.371919 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.371949 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.371974 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:18Z","lastTransitionTime":"2026-01-27T21:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.376742 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.395566 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.428972 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] 
pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.449398 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.474709 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.475456 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.475545 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.475570 4803 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.475600 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.475623 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:18Z","lastTransitionTime":"2026-01-27T21:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.495499 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.520187 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.558114 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbe
b8cf68e79eaf3209862af81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150112 6467 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150147 6467 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kube\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.578689 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.578737 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.578750 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.578769 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.578781 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:18Z","lastTransitionTime":"2026-01-27T21:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.580022 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.598908 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.618789 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.641017 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.660608 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.683004 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.683088 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.683106 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.683134 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.683154 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:18Z","lastTransitionTime":"2026-01-27T21:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.686239 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.706379 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97826e-c50d-4cda-b3ce-56bbf0e97f6a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db7e62956ef3526e02fdb5bc208185103cfbe40b86346dc993fb956bdb15cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ffe7f19851c6226af442882ecaa7514cc38d6bd1467881cbb700190fb58cd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.730098 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.748668 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.761648 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.778617 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.786221 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.786270 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.786287 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.786309 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.786327 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:18Z","lastTransitionTime":"2026-01-27T21:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.810391 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:18 crc kubenswrapper[4803]: E0127 21:48:18.810621 4803 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 21:48:18 crc kubenswrapper[4803]: E0127 21:48:18.810703 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs podName:0d757da7-4079-4a7a-806d-560834fe95ae nodeName:}" failed. No retries permitted until 2026-01-27 21:48:34.810680788 +0000 UTC m=+67.226702497 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs") pod "network-metrics-daemon-72wq6" (UID: "0d757da7-4079-4a7a-806d-560834fe95ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.814077 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbe
b8cf68e79eaf3209862af81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150112 6467 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150147 6467 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kube\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.837042 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.856501 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.876748 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.890210 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.890294 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.890313 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.890343 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.890366 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:18Z","lastTransitionTime":"2026-01-27T21:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.901191 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24
cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.918191 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:5
1Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:18Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.993493 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.993620 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.993648 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.993808 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.993888 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:18Z","lastTransitionTime":"2026-01-27T21:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.012391 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.012640 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.012680 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:48:51.012637413 +0000 UTC m=+83.428659152 (durationBeforeRetry 32s). 
Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.993493 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.993620 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.993648 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.993808 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:18 crc kubenswrapper[4803]: I0127 21:48:18.993888 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:18Z","lastTransitionTime":"2026-01-27T21:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.012391 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.012640 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.012680 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:48:51.012637413 +0000 UTC m=+83.428659152 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.012766 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.012806 4803 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.012960 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:51.01292809 +0000 UTC m=+83.428950019 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.013052 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.013100 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.013121 4803 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.013185 4803 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.013211 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:51.013180437 +0000 UTC m=+83.429202166 (durationBeforeRetry 32s).
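Note: the TearDown failure above is a registration-ordering problem, not data loss. This early in kubelet startup the kubevirt.io.hostpath-provisioner CSI node plugin has not yet re-registered over the kubelet's plugin-registration socket, and the secret/configMap "not registered" errors are the same story for the kubelet's object caches. Each failed operation is requeued with exponential backoff; the 32s here is consistent with the volume backoff in current kubelet sources (500 ms doubled per consecutive failure, capped at just over two minutes), which would make these roughly seventh attempts. A quick check for the driver's registration socket, assuming the default kubelet root directory:

```go
// Sketch: look for the CSI driver's registration socket under the kubelet's
// conventional plugin-registration directory. The driver name comes from the
// log above; adjust the path if kubelet runs with a non-default --root-dir.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const dir = "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("read %s: %v", dir, err)
	}
	found := false
	for _, e := range entries {
		fmt.Println(e.Name())
		if strings.Contains(e.Name(), "kubevirt.io.hostpath-provisioner") {
			found = true
		}
	}
	fmt.Printf("hostpath-provisioner registration socket present: %t\n", found)
}
```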
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.013067 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.013259 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:51.013243758 +0000 UTC m=+83.429265487 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.098485 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.098560 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.098580 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.098611 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.098631 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:19Z","lastTransitionTime":"2026-01-27T21:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.114761 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.115100 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.115154 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.115177 4803 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.115283 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 21:48:51.115253139 +0000 UTC m=+83.531274868 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.202613 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.202709 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.202746 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.202788 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.202810 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:19Z","lastTransitionTime":"2026-01-27T21:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.274380 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:41:50.474758821 +0000 UTC Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.306227 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.306454 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.306481 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.306239 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.306925 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.306956 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.307218 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.307291 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.307327 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.307390 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.307416 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.307470 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:19Z","lastTransitionTime":"2026-01-27T21:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:19 crc kubenswrapper[4803]: E0127 21:48:19.307664 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.412535 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.412595 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.412606 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.412626 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.412665 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:19Z","lastTransitionTime":"2026-01-27T21:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.516579 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.516657 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.516677 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.516707 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.516727 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:19Z","lastTransitionTime":"2026-01-27T21:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.621291 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.621900 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.622073 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.622232 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.622367 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:19Z","lastTransitionTime":"2026-01-27T21:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.725835 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.725932 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.725950 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.725976 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.725997 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:19Z","lastTransitionTime":"2026-01-27T21:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.829382 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.829455 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.829476 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.829507 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.829530 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:19Z","lastTransitionTime":"2026-01-27T21:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.932189 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.932623 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.932796 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.932999 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:19 crc kubenswrapper[4803]: I0127 21:48:19.933149 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:19Z","lastTransitionTime":"2026-01-27T21:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.036748 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.036829 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.036891 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.036928 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.036954 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:20Z","lastTransitionTime":"2026-01-27T21:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.141405 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.141484 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.141506 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.141537 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.141561 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:20Z","lastTransitionTime":"2026-01-27T21:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.244400 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.244459 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.244477 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.244502 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.244519 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:20Z","lastTransitionTime":"2026-01-27T21:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.275341 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:00:59.204858999 +0000 UTC Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.349275 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.349333 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.349357 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.349392 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.349417 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:20Z","lastTransitionTime":"2026-01-27T21:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.453090 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.453187 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.453211 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.453247 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.453271 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:20Z","lastTransitionTime":"2026-01-27T21:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.556903 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.556977 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.556997 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.557027 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.557047 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:20Z","lastTransitionTime":"2026-01-27T21:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.660220 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.660311 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.660337 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.660378 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.660404 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:20Z","lastTransitionTime":"2026-01-27T21:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.764628 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.765283 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.765303 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.765331 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.765349 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:20Z","lastTransitionTime":"2026-01-27T21:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.868921 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.868982 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.868999 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.869024 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.869043 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:20Z","lastTransitionTime":"2026-01-27T21:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.972622 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.972735 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.972763 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.972803 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:20 crc kubenswrapper[4803]: I0127 21:48:20.972831 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:20Z","lastTransitionTime":"2026-01-27T21:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.076897 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.076984 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.077003 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.077033 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.077055 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:21Z","lastTransitionTime":"2026-01-27T21:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.180931 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.181017 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.181037 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.181073 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.181095 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:21Z","lastTransitionTime":"2026-01-27T21:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.276153 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 15:25:54.573468721 +0000 UTC Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.283828 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.283916 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.283935 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.283965 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.283985 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:21Z","lastTransitionTime":"2026-01-27T21:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.306645 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.306673 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.306840 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:21 crc kubenswrapper[4803]: E0127 21:48:21.307024 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.307132 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:21 crc kubenswrapper[4803]: E0127 21:48:21.307228 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:21 crc kubenswrapper[4803]: E0127 21:48:21.307340 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:21 crc kubenswrapper[4803]: E0127 21:48:21.307578 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.386912 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.386989 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.387014 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.387048 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.387074 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:21Z","lastTransitionTime":"2026-01-27T21:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.490985 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.491115 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.491147 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.491209 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.491240 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:21Z","lastTransitionTime":"2026-01-27T21:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.593809 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.593940 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.593966 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.593995 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.594021 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:21Z","lastTransitionTime":"2026-01-27T21:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.696795 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.696907 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.696927 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.696957 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.696979 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:21Z","lastTransitionTime":"2026-01-27T21:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.800908 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.800988 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.801014 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.801052 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.801072 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:21Z","lastTransitionTime":"2026-01-27T21:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.904741 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.904824 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.904885 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.904920 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:21 crc kubenswrapper[4803]: I0127 21:48:21.904941 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:21Z","lastTransitionTime":"2026-01-27T21:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.008268 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.008327 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.008354 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.008378 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.008406 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:22Z","lastTransitionTime":"2026-01-27T21:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.112354 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.112444 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.112467 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.112499 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.112523 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:22Z","lastTransitionTime":"2026-01-27T21:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.224980 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.225072 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.225096 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.225128 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.225148 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:22Z","lastTransitionTime":"2026-01-27T21:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.277118 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:45:05.847957923 +0000 UTC Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.328276 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.328353 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.328371 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.328401 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.328422 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:22Z","lastTransitionTime":"2026-01-27T21:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.432881 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.432988 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.433009 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.433048 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.433082 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:22Z","lastTransitionTime":"2026-01-27T21:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.537737 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.537893 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.537923 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.537958 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.537984 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:22Z","lastTransitionTime":"2026-01-27T21:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.641541 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.641651 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.641663 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.641684 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.641698 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:22Z","lastTransitionTime":"2026-01-27T21:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.745940 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.746047 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.746067 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.746092 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.746126 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:22Z","lastTransitionTime":"2026-01-27T21:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.850709 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.850990 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.851088 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.852022 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.852432 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:22Z","lastTransitionTime":"2026-01-27T21:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.955596 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.955649 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.955665 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.955691 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:22 crc kubenswrapper[4803]: I0127 21:48:22.955709 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:22Z","lastTransitionTime":"2026-01-27T21:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.058834 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.058965 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.058985 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.059013 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.059037 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:23Z","lastTransitionTime":"2026-01-27T21:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.162252 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.162331 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.162351 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.162380 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.162400 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:23Z","lastTransitionTime":"2026-01-27T21:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.266066 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.266142 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.266166 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.266203 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.266225 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:23Z","lastTransitionTime":"2026-01-27T21:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.278231 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 05:32:26.063577444 +0000 UTC Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.306089 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.306127 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.306143 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:23 crc kubenswrapper[4803]: E0127 21:48:23.306307 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.306329 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:23 crc kubenswrapper[4803]: E0127 21:48:23.306457 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:23 crc kubenswrapper[4803]: E0127 21:48:23.306824 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:23 crc kubenswrapper[4803]: E0127 21:48:23.307008 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.369472 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.369576 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.369601 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.369625 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.369643 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:23Z","lastTransitionTime":"2026-01-27T21:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.473089 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.473160 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.473179 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.473209 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.473231 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:23Z","lastTransitionTime":"2026-01-27T21:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.576800 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.576889 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.576902 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.576926 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.576941 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:23Z","lastTransitionTime":"2026-01-27T21:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.680389 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.680457 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.680475 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.680501 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.680531 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:23Z","lastTransitionTime":"2026-01-27T21:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.783412 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.783450 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.783461 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.783480 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.783496 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:23Z","lastTransitionTime":"2026-01-27T21:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.886437 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.886650 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.886680 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.886713 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.886733 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:23Z","lastTransitionTime":"2026-01-27T21:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.990286 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.990366 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.990384 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.990414 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:23 crc kubenswrapper[4803]: I0127 21:48:23.990433 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:23Z","lastTransitionTime":"2026-01-27T21:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.094534 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.094673 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.094693 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.094721 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.094738 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.198767 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.198841 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.198895 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.198927 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.198956 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.278396 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:51:23.876678657 +0000 UTC Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.302899 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.302988 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.303005 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.303030 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.303043 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.304762 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.304824 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.304840 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.304919 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.304933 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: E0127 21:48:24.323325 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:24Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.329102 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.329163 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.329176 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.329194 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.329248 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: E0127 21:48:24.356818 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:24Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.362780 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.362859 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.362872 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.362887 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.362897 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: E0127 21:48:24.382902 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:24Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.387402 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.387444 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.387456 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.387473 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.387487 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: E0127 21:48:24.401373 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:24Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.406680 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.406759 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.406780 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.406811 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.406834 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: E0127 21:48:24.427594 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:24Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:24 crc kubenswrapper[4803]: E0127 21:48:24.427985 4803 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.430795 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.430871 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.430888 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.430912 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.430927 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.533829 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.533911 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.533923 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.533965 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.533980 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.636714 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.636828 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.636866 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.636884 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.636896 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.739951 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.740006 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.740017 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.740031 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.740040 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.843422 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.843456 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.843474 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.843491 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.843502 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.947216 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.947332 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.947368 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.947413 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:24 crc kubenswrapper[4803]: I0127 21:48:24.947439 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:24Z","lastTransitionTime":"2026-01-27T21:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.051271 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.051352 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.051375 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.051406 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.051426 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:25Z","lastTransitionTime":"2026-01-27T21:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.154539 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.154622 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.154643 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.154673 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.154695 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:25Z","lastTransitionTime":"2026-01-27T21:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.258049 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.258139 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.258165 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.258200 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.258226 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:25Z","lastTransitionTime":"2026-01-27T21:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.278917 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:48:24.71144172 +0000 UTC Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.306718 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.306722 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.306737 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.306737 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:25 crc kubenswrapper[4803]: E0127 21:48:25.307029 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:25 crc kubenswrapper[4803]: E0127 21:48:25.307294 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:25 crc kubenswrapper[4803]: E0127 21:48:25.307416 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:25 crc kubenswrapper[4803]: E0127 21:48:25.307577 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.363433 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.363501 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.363517 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.363542 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.363561 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:25Z","lastTransitionTime":"2026-01-27T21:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.467996 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.468053 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.468066 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.468087 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.468100 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:25Z","lastTransitionTime":"2026-01-27T21:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.571947 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.572031 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.572048 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.572074 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.572092 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:25Z","lastTransitionTime":"2026-01-27T21:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.676380 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.676442 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.676461 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.676487 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.676505 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:25Z","lastTransitionTime":"2026-01-27T21:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.779836 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.780128 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.780149 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.780182 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.780202 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:25Z","lastTransitionTime":"2026-01-27T21:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.883507 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.883587 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.883608 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.883638 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.883658 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:25Z","lastTransitionTime":"2026-01-27T21:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.987079 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.987155 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.987174 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.987202 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:25 crc kubenswrapper[4803]: I0127 21:48:25.987222 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:25Z","lastTransitionTime":"2026-01-27T21:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.090893 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.090991 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.091015 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.091049 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.091070 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:26Z","lastTransitionTime":"2026-01-27T21:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.195011 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.195084 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.195104 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.195135 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.195155 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:26Z","lastTransitionTime":"2026-01-27T21:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.280079 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:27:31.122000128 +0000 UTC Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.299270 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.299362 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.299387 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.299423 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.299454 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:26Z","lastTransitionTime":"2026-01-27T21:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.407758 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.408659 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.408714 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.408747 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.408767 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:26Z","lastTransitionTime":"2026-01-27T21:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.512277 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.512329 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.512341 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.512364 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.512378 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:26Z","lastTransitionTime":"2026-01-27T21:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.616479 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.616565 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.616579 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.616602 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.616617 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:26Z","lastTransitionTime":"2026-01-27T21:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.720341 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.720405 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.720423 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.720446 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.720468 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:26Z","lastTransitionTime":"2026-01-27T21:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.824009 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.824077 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.824097 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.824124 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.824145 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:26Z","lastTransitionTime":"2026-01-27T21:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.929831 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.929898 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.929907 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.929922 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:26 crc kubenswrapper[4803]: I0127 21:48:26.929933 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:26Z","lastTransitionTime":"2026-01-27T21:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.033271 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.033342 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.033353 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.033373 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.033385 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:27Z","lastTransitionTime":"2026-01-27T21:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.136934 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.136976 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.136985 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.137001 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.137011 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:27Z","lastTransitionTime":"2026-01-27T21:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.239596 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.239648 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.239661 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.239680 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.239695 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:27Z","lastTransitionTime":"2026-01-27T21:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.281106 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:19:09.876790078 +0000 UTC Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.306560 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.306597 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.306641 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.306569 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:27 crc kubenswrapper[4803]: E0127 21:48:27.306803 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:27 crc kubenswrapper[4803]: E0127 21:48:27.306912 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:27 crc kubenswrapper[4803]: E0127 21:48:27.306994 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:27 crc kubenswrapper[4803]: E0127 21:48:27.307040 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.342747 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.342812 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.342830 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.342905 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.342943 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:27Z","lastTransitionTime":"2026-01-27T21:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.446426 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.446491 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.446504 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.446547 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.446560 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:27Z","lastTransitionTime":"2026-01-27T21:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.549457 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.549554 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.549577 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.549612 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.549635 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:27Z","lastTransitionTime":"2026-01-27T21:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.653586 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.653665 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.653678 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.653699 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.653742 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:27Z","lastTransitionTime":"2026-01-27T21:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.758043 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.758128 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.758144 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.758169 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.758185 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:27Z","lastTransitionTime":"2026-01-27T21:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.862192 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.862278 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.862296 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.862327 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.862347 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:27Z","lastTransitionTime":"2026-01-27T21:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.966441 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.966538 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.966569 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.966610 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:27 crc kubenswrapper[4803]: I0127 21:48:27.966632 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:27Z","lastTransitionTime":"2026-01-27T21:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.070598 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.070691 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.070716 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.070757 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.070784 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:28Z","lastTransitionTime":"2026-01-27T21:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.175114 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.175209 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.175233 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.175265 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.175293 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:28Z","lastTransitionTime":"2026-01-27T21:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.278132 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.278178 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.278191 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.278208 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.278224 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:28Z","lastTransitionTime":"2026-01-27T21:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.281273 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:11:08.22019332 +0000 UTC Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.327912 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.350996 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.367940 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.380715 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.380752 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.380760 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.380778 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.380790 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:28Z","lastTransitionTime":"2026-01-27T21:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.383899 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.398980 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.414566 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.430632 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.441213 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.456162 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.466250 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97826e-c50d-4cda-b3ce-56bbf0e97f6a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db7e62956ef3526e02fdb5bc208185103cfbe40b86346dc993fb956bdb15cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ffe7f19851c6226af442882ecaa7514cc38d6bd1467881cbb700190fb58cd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.479974 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 
2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.483731 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.483801 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.483815 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.483839 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.483882 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:28Z","lastTransitionTime":"2026-01-27T21:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.506695 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbe
b8cf68e79eaf3209862af81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150112 6467 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150147 6467 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kube\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.528116 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.541270 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.558059 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.572702 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.586881 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.586914 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.586923 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.586937 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.586947 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:28Z","lastTransitionTime":"2026-01-27T21:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.588640 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:28Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.688824 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.688890 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.688909 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.688930 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.688946 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:28Z","lastTransitionTime":"2026-01-27T21:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.791910 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.791988 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.791998 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.792015 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.792026 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:28Z","lastTransitionTime":"2026-01-27T21:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.894637 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.894685 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.894697 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.894713 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.894724 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:28Z","lastTransitionTime":"2026-01-27T21:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.998791 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.998922 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.999012 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.999039 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:28 crc kubenswrapper[4803]: I0127 21:48:28.999060 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:28Z","lastTransitionTime":"2026-01-27T21:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.102433 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.102491 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.102507 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.102528 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.102543 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:29Z","lastTransitionTime":"2026-01-27T21:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.206775 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.206887 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.206915 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.206994 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.207025 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:29Z","lastTransitionTime":"2026-01-27T21:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.282162 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:06:46.303708594 +0000 UTC Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.306579 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.306699 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:29 crc kubenswrapper[4803]: E0127 21:48:29.306762 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.306828 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.306829 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:29 crc kubenswrapper[4803]: E0127 21:48:29.307064 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:29 crc kubenswrapper[4803]: E0127 21:48:29.307143 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:29 crc kubenswrapper[4803]: E0127 21:48:29.307267 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.310206 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.310288 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.310340 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.310365 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.310381 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:29Z","lastTransitionTime":"2026-01-27T21:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.413313 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.413381 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.413405 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.413439 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.413462 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:29Z","lastTransitionTime":"2026-01-27T21:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.517469 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.517571 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.517599 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.517635 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.517663 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:29Z","lastTransitionTime":"2026-01-27T21:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.620754 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.620821 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.620841 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.620898 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.620918 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:29Z","lastTransitionTime":"2026-01-27T21:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.724564 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.724611 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.724623 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.724640 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.724652 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:29Z","lastTransitionTime":"2026-01-27T21:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.828041 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.828123 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.828140 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.828169 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.828191 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:29Z","lastTransitionTime":"2026-01-27T21:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.931615 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.931686 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.931701 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.931725 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:29 crc kubenswrapper[4803]: I0127 21:48:29.931741 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:29Z","lastTransitionTime":"2026-01-27T21:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.034750 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.034832 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.034884 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.034915 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.034935 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:30Z","lastTransitionTime":"2026-01-27T21:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.139584 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.139653 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.139672 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.139697 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.139717 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:30Z","lastTransitionTime":"2026-01-27T21:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.243701 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.243782 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.243802 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.243833 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.243883 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:30Z","lastTransitionTime":"2026-01-27T21:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.282961 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:58:08.292351413 +0000 UTC Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.347485 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.347547 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.347569 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.347595 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.347613 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:30Z","lastTransitionTime":"2026-01-27T21:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.451779 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.451837 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.451884 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.451908 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.451921 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:30Z","lastTransitionTime":"2026-01-27T21:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
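The kubelet-serving certificate itself is not the problem here: the certificate_manager entries show it valid until 2026-02-24, and only the rotation deadline changes from line to line. That is expected, since client-go re-draws the deadline on each evaluation at a random point between 70% and 90% of the certificate's lifetime. A toy recomputation under the assumption of a one-year certificate issued 2025-02-24 (the log shows only the expiry):

```python
# Toy illustration of why the rotation deadline differs on every pass: client-go
# re-draws it each evaluation at a random point between 70% and 90% of the
# certificate's lifetime. The issue time is an assumption (the log shows only the
# 2026-02-24 05:53:03 UTC expiry); a one-year certificate is assumed here.
import random
from datetime import datetime, timedelta, timezone

not_before = datetime(2025, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # assumed issue time
not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)   # expiry from the log

lifetime = not_after - not_before
for _ in range(3):
    frac = 0.7 + 0.2 * random.random()  # the 70-90% jitter window
    print("rotation deadline:", not_before + timedelta(seconds=lifetime.total_seconds() * frac))
```

The deadlines logged in this section, early December through mid-January, all fall inside that jitter window, so these lines are routine rotation bookkeeping rather than part of the failure.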
Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.554951 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.555002 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.555013 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.555030 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.555041 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:30Z","lastTransitionTime":"2026-01-27T21:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.658938 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.659567 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.659587 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.659618 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.659639 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:30Z","lastTransitionTime":"2026-01-27T21:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.763730 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.763805 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.763823 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.763882 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.763903 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:30Z","lastTransitionTime":"2026-01-27T21:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.867652 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.867716 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.867729 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.867749 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.867762 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:30Z","lastTransitionTime":"2026-01-27T21:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.971244 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.971303 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.971313 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.971331 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:30 crc kubenswrapper[4803]: I0127 21:48:30.971344 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:30Z","lastTransitionTime":"2026-01-27T21:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.075309 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.075378 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.075398 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.075424 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.075442 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:31Z","lastTransitionTime":"2026-01-27T21:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.186710 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.186754 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.186766 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.186784 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.186797 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:31Z","lastTransitionTime":"2026-01-27T21:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.283921 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:02:22.866697463 +0000 UTC Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.289541 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.289595 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.289613 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.289639 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.289657 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:31Z","lastTransitionTime":"2026-01-27T21:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.305831 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:31 crc kubenswrapper[4803]: E0127 21:48:31.305982 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.306158 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:31 crc kubenswrapper[4803]: E0127 21:48:31.306228 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.306607 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:31 crc kubenswrapper[4803]: E0127 21:48:31.306702 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.306761 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:31 crc kubenswrapper[4803]: E0127 21:48:31.306834 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.392928 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.392976 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.392992 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.393014 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.393030 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:31Z","lastTransitionTime":"2026-01-27T21:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.495325 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.495389 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.495408 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.495433 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.495451 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:31Z","lastTransitionTime":"2026-01-27T21:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.598191 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.598246 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.598263 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.598289 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.598312 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:31Z","lastTransitionTime":"2026-01-27T21:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.702021 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.702085 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.702104 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.702138 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.702167 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:31Z","lastTransitionTime":"2026-01-27T21:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.805897 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.805979 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.805995 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.806018 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.806034 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:31Z","lastTransitionTime":"2026-01-27T21:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.909057 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.909129 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.909148 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.909170 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:31 crc kubenswrapper[4803]: I0127 21:48:31.909189 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:31Z","lastTransitionTime":"2026-01-27T21:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.011657 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.011707 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.011718 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.011734 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.011744 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:32Z","lastTransitionTime":"2026-01-27T21:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.114059 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.114097 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.114105 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.114118 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.114127 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:32Z","lastTransitionTime":"2026-01-27T21:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.216324 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.216365 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.216374 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.216397 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.216407 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:32Z","lastTransitionTime":"2026-01-27T21:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.284509 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:37:35.377102377 +0000 UTC Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.306750 4803 scope.go:117] "RemoveContainer" containerID="6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c" Jan 27 21:48:32 crc kubenswrapper[4803]: E0127 21:48:32.307193 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.318385 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.318450 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.318466 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.318489 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.318506 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:32Z","lastTransitionTime":"2026-01-27T21:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.420780 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.420813 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.420822 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.420834 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.420842 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:32Z","lastTransitionTime":"2026-01-27T21:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
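The RemoveContainer and CrashLoopBackOff entries above point at the likely source of the CNI symptom: ovnkube-controller in ovnkube-node-6dhj4 keeps exiting before OVN-Kubernetes can write a config into /etc/kubernetes/cni/net.d/. The quoted back-off of 20s follows the kubelet's standard restart backoff, which by default starts at 10s, doubles per failed restart, and caps at 300s, so this pod is on its second retry. A small sketch of that schedule:

```python
# Sketch of the kubelet's crash-loop restart backoff (default settings): the delay
# starts at 10s, doubles on each failed restart, and caps at 300s, so the quoted
# "back-off 20s" corresponds to the pod's second restart attempt.
def crashloop_delays(restarts: int, initial: int = 10, cap: int = 300):
    delay = initial
    for _ in range(restarts):
        yield delay
        delay = min(delay * 2, cap)


print(list(crashloop_delays(6)))  # [10, 20, 40, 80, 160, 300]
```

Until this container stays up long enough to render the CNI config, every NodeNotReady status block that follows will keep repeating.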
Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.523308 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.523365 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.523377 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.523394 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.523406 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:32Z","lastTransitionTime":"2026-01-27T21:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.626593 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.626647 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.626656 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.626668 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.626677 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:32Z","lastTransitionTime":"2026-01-27T21:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.728658 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.728721 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.728739 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.728765 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.728783 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:32Z","lastTransitionTime":"2026-01-27T21:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.831284 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.831336 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.831350 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.831370 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.831383 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:32Z","lastTransitionTime":"2026-01-27T21:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.933874 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.933944 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.933966 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.933991 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:32 crc kubenswrapper[4803]: I0127 21:48:32.934010 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:32Z","lastTransitionTime":"2026-01-27T21:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.038000 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.038061 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.038070 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.038085 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.038094 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:33Z","lastTransitionTime":"2026-01-27T21:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.141722 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.141792 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.141810 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.141841 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.141886 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:33Z","lastTransitionTime":"2026-01-27T21:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.244879 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.244951 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.244966 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.244997 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.245020 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:33Z","lastTransitionTime":"2026-01-27T21:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.284747 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:17:46.033152508 +0000 UTC Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.306275 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.306322 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.306359 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.306328 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:33 crc kubenswrapper[4803]: E0127 21:48:33.306757 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:33 crc kubenswrapper[4803]: E0127 21:48:33.306980 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:33 crc kubenswrapper[4803]: E0127 21:48:33.306936 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:33 crc kubenswrapper[4803]: E0127 21:48:33.306888 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.348036 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.348078 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.348089 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.348108 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:33 crc kubenswrapper[4803]: I0127 21:48:33.348120 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:33Z","lastTransitionTime":"2026-01-27T21:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.071071 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.071137 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.071155 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.071179 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.071198 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:34Z","lastTransitionTime":"2026-01-27T21:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.285964 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:27:53.387798922 +0000 UTC
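Both rotation deadlines logged by certificate_manager.go (2026-01-07 and 2025-11-15) are already behind the node's clock of 2026-01-27, which is why the kubelet keeps recomputing a deadline and retrying every second. A sketch of how such a deadline can land in the past, assuming client-go's jittered 70-90% of certificate lifetime and an assumed 30-day issuance window (the log shows only the expiration, not notBefore):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// expiration taken from the certificate_manager.go line above
	notAfter, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	lifetime := 30 * 24 * time.Hour // assumed; the log does not show notBefore
	notBefore := notAfter.Add(-lifetime)
	jitter := 0.7 + 0.2*rand.Float64() // client-go picks a deadline at 70-90% of lifetime
	deadline := notBefore.Add(time.Duration(float64(lifetime) * jitter))
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		fmt.Println("deadline in the past: rotate immediately and retry on failure")
	}
}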
Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.546176 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.546204 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.546213 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.546226 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.546235 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:34Z","lastTransitionTime":"2026-01-27T21:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:34 crc kubenswrapper[4803]: E0127 21:48:34.559088 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:34Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.563801 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.563832 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.563840 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.563871 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.563881 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:34Z","lastTransitionTime":"2026-01-27T21:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:34 crc kubenswrapper[4803]: E0127 21:48:34.580739 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:34Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.584435 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.584479 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
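The status patch above is rejected not because of its content but because the node.network-node-identity webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24, months before the node's current time of 2026-01-27. A small Go sketch to confirm what that endpoint is serving; the address comes from the log line, everything else is illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint named in the error above
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // inspect the certificate without trusting it
	})
	if err != nil {
		fmt.Printf("dial %s: %v\n", addr, err)
		return
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter)
	if now.After(cert.NotAfter) {
		// the same comparison the TLS layer reports in the kubelet error
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}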
event="NodeHasNoDiskPressure" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.584491 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.584507 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.584517 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:34Z","lastTransitionTime":"2026-01-27T21:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:34 crc kubenswrapper[4803]: E0127 21:48:34.595797 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:34Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.599585 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.599642 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.599661 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.599679 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.599694 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:34Z","lastTransitionTime":"2026-01-27T21:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:34 crc kubenswrapper[4803]: E0127 21:48:34.610702 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:34Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.613741 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.613779 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.613790 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.613808 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.613820 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:34Z","lastTransitionTime":"2026-01-27T21:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:34 crc kubenswrapper[4803]: E0127 21:48:34.626270 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:34Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:34 crc kubenswrapper[4803]: E0127 21:48:34.626379 4803 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.627983 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.628055 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.628067 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.628116 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.628129 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:34Z","lastTransitionTime":"2026-01-27T21:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.730257 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.730303 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.730316 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.730332 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.730343 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:34Z","lastTransitionTime":"2026-01-27T21:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.821131 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:34 crc kubenswrapper[4803]: E0127 21:48:34.821286 4803 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 21:48:34 crc kubenswrapper[4803]: E0127 21:48:34.821360 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs podName:0d757da7-4079-4a7a-806d-560834fe95ae nodeName:}" failed. No retries permitted until 2026-01-27 21:49:06.821341259 +0000 UTC m=+99.237362958 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs") pod "network-metrics-daemon-72wq6" (UID: "0d757da7-4079-4a7a-806d-560834fe95ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.833267 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.833402 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.833509 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.833608 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.833704 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:34Z","lastTransitionTime":"2026-01-27T21:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.937856 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.937913 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.937923 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.937938 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:34 crc kubenswrapper[4803]: I0127 21:48:34.937947 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:34Z","lastTransitionTime":"2026-01-27T21:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.041074 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.041132 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.041147 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.041168 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.041183 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:35Z","lastTransitionTime":"2026-01-27T21:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.143606 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.143662 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.143674 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.143691 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.143702 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:35Z","lastTransitionTime":"2026-01-27T21:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.246866 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.246907 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.246916 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.246932 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.246942 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:35Z","lastTransitionTime":"2026-01-27T21:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.286615 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:30:31.702755824 +0000 UTC Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.305941 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.305979 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.306012 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.305981 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:35 crc kubenswrapper[4803]: E0127 21:48:35.306093 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:35 crc kubenswrapper[4803]: E0127 21:48:35.306229 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:35 crc kubenswrapper[4803]: E0127 21:48:35.306314 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:35 crc kubenswrapper[4803]: E0127 21:48:35.306386 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.348710 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.348750 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.348760 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.348776 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.348786 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:35Z","lastTransitionTime":"2026-01-27T21:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.451296 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.451342 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.451356 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.451378 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.451390 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:35Z","lastTransitionTime":"2026-01-27T21:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.554261 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.554308 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.554320 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.554338 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.554353 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:35Z","lastTransitionTime":"2026-01-27T21:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.657694 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.657735 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.657745 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.657758 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.657769 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:35Z","lastTransitionTime":"2026-01-27T21:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.760081 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.760123 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.760141 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.760164 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.760180 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:35Z","lastTransitionTime":"2026-01-27T21:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.863419 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.863462 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.863471 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.863483 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.863493 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:35Z","lastTransitionTime":"2026-01-27T21:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.966491 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.966551 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.966567 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.966592 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:35 crc kubenswrapper[4803]: I0127 21:48:35.966610 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:35Z","lastTransitionTime":"2026-01-27T21:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.069480 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.069577 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.069589 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.069606 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.069621 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:36Z","lastTransitionTime":"2026-01-27T21:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.172624 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.172712 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.172733 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.172766 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.172788 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:36Z","lastTransitionTime":"2026-01-27T21:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.276150 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.276553 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.276700 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.276828 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.277001 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:36Z","lastTransitionTime":"2026-01-27T21:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.287391 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:25:48.711018317 +0000 UTC Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.379184 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.379484 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.379586 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.379687 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.379773 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:36Z","lastTransitionTime":"2026-01-27T21:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.482342 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.482388 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.482397 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.482412 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.482422 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:36Z","lastTransitionTime":"2026-01-27T21:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.584956 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.584998 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.585009 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.585023 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.585033 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:36Z","lastTransitionTime":"2026-01-27T21:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.686831 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.686890 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.686902 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.686917 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.686926 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:36Z","lastTransitionTime":"2026-01-27T21:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.739909 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qnns7_2a912f01-6d26-421f-8b21-fb2f98d5c2e6/kube-multus/0.log" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.739973 4803 generic.go:334] "Generic (PLEG): container finished" podID="2a912f01-6d26-421f-8b21-fb2f98d5c2e6" containerID="693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5" exitCode=1 Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.740011 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qnns7" event={"ID":"2a912f01-6d26-421f-8b21-fb2f98d5c2e6","Type":"ContainerDied","Data":"693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5"} Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.740423 4803 scope.go:117] "RemoveContainer" containerID="693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.768300 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.795488 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 
21:48:36.795530 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.795557 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.795582 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.795595 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:36Z","lastTransitionTime":"2026-01-27T21:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.813603 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbe
b8cf68e79eaf3209862af81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150112 6467 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150147 6467 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kube\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.836030 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.853815 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.866291 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.904175 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.904660 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.904740 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.904797 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.904898 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.904966 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:36Z","lastTransitionTime":"2026-01-27T21:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.916503 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.929359 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.945434 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.963236 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:36Z\\\",\\\"message\\\":\\\"2026-01-27T21:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9\\\\n2026-01-27T21:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9 to /host/opt/cni/bin/\\\\n2026-01-27T21:47:51Z [verbose] multus-daemon 
started\\\\n2026-01-27T21:47:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T21:48:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.977172 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 
21:48:36 crc kubenswrapper[4803]: I0127 21:48:36.992370 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.006421 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97826e-c50d-4cda-b3ce-56bbf0e97f6a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db7e62956ef3526e02fdb5bc208185103cfbe40b86346dc993fb956bdb15cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ffe7f19851c6226af442882ecaa7514cc38d6bd1467881cbb700190fb58cd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.007188 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.007217 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.007225 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.007238 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.007247 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:37Z","lastTransitionTime":"2026-01-27T21:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.020447 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.032394 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.042916 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.054449 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.109072 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.109330 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.109414 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.109479 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.109534 4803 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:37Z","lastTransitionTime":"2026-01-27T21:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.212085 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.212216 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.212278 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.212338 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.212416 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:37Z","lastTransitionTime":"2026-01-27T21:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.287797 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:15:48.725034145 +0000 UTC Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.306135 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.306188 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:37 crc kubenswrapper[4803]: E0127 21:48:37.306275 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.306328 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:37 crc kubenswrapper[4803]: E0127 21:48:37.306500 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.306540 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:37 crc kubenswrapper[4803]: E0127 21:48:37.306618 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:37 crc kubenswrapper[4803]: E0127 21:48:37.306661 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.314275 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.314321 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.314335 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.314354 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.314370 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:37Z","lastTransitionTime":"2026-01-27T21:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.417341 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.417423 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.417439 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.417471 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.417491 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:37Z","lastTransitionTime":"2026-01-27T21:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.520496 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.520579 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.520594 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.520622 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.520642 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:37Z","lastTransitionTime":"2026-01-27T21:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.623220 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.623343 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.623359 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.623376 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.623388 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:37Z","lastTransitionTime":"2026-01-27T21:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.725841 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.725952 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.725977 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.726004 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.726021 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:37Z","lastTransitionTime":"2026-01-27T21:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.745508 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qnns7_2a912f01-6d26-421f-8b21-fb2f98d5c2e6/kube-multus/0.log" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.745588 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qnns7" event={"ID":"2a912f01-6d26-421f-8b21-fb2f98d5c2e6","Type":"ContainerStarted","Data":"59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66"} Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.771324 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.808275 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150112 6467 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150147 6467 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kube\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.828320 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.828396 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.828414 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.828441 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.828458 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:37Z","lastTransitionTime":"2026-01-27T21:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.831361 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.846109 4803 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.864514 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.880091 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.895938 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.912709 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.931256 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.931314 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.931327 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.931348 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.931361 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:37Z","lastTransitionTime":"2026-01-27T21:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.938038 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:36Z\\\",\\\"message\\\":\\\"2026-01-27T21:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9\\\\n2026-01-27T21:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9 to /host/opt/cni/bin/\\\\n2026-01-27T21:47:51Z [verbose] multus-daemon started\\\\n2026-01-27T21:47:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T21:48:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.954772 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 
21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.972722 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:37 crc kubenswrapper[4803]: I0127 21:48:37.990093 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.007944 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.026011 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.033671 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.033699 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.033707 4803 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.033723 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.033732 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:38Z","lastTransitionTime":"2026-01-27T21:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.041571 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.062786 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.080087 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97826e-c50d-4cda-b3ce-56bbf0e97f6a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db7e62956ef3526e02fdb5bc208185103cfbe40b86346dc993fb956bdb15cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ffe7f19851c6226af442882ecaa7514cc38d6bd1467881cbb700190fb58cd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.136095 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.136157 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.136175 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.136199 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.136216 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:38Z","lastTransitionTime":"2026-01-27T21:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.238495 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.238919 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.239017 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.239180 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.239286 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:38Z","lastTransitionTime":"2026-01-27T21:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.289532 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:12:09.518608471 +0000 UTC Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.321430 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.341977 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.342906 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.342940 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.342951 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.342968 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.342979 4803 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:38Z","lastTransitionTime":"2026-01-27T21:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.359474 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:36Z\\\",\\\"message\\\":\\\"2026-01-27T21:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9\\\\n2026-01-27T21:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9 to /host/opt/cni/bin/\\\\n2026-01-27T21:47:51Z [verbose] multus-daemon started\\\\n2026-01-27T21:47:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T21:48:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.376669 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 
21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.391799 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.408526 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97826e-c50d-4cda-b3ce-56bbf0e97f6a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db7e62956ef3526e02fdb5bc208185103cfbe40b86346dc993fb956bdb15cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ffe7f19851c6226af442882ecaa7514cc38d6bd1467881cbb700190fb58cd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.426008 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.440205 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.444730 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.444760 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.444785 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.444800 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.444810 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:38Z","lastTransitionTime":"2026-01-27T21:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.450979 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.463314 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.474396 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.491388 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbe
b8cf68e79eaf3209862af81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150112 6467 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150147 6467 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kube\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.501861 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.514714 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.524701 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.535414 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.545105 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.547010 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.547065 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.547081 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.547105 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.547120 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:38Z","lastTransitionTime":"2026-01-27T21:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.649355 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.649399 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.649409 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.649424 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.649434 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:38Z","lastTransitionTime":"2026-01-27T21:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.751770 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.752125 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.752136 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.752150 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.752159 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:38Z","lastTransitionTime":"2026-01-27T21:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.854346 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.854391 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.854401 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.854417 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.854429 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:38Z","lastTransitionTime":"2026-01-27T21:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.956562 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.956623 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.956637 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.956654 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:38 crc kubenswrapper[4803]: I0127 21:48:38.956666 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:38Z","lastTransitionTime":"2026-01-27T21:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.059527 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.059572 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.059584 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.059603 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.059615 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:39Z","lastTransitionTime":"2026-01-27T21:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.162744 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.162785 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.162799 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.162820 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.162834 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:39Z","lastTransitionTime":"2026-01-27T21:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.265606 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.265676 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.265693 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.265717 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.265735 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:39Z","lastTransitionTime":"2026-01-27T21:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.290215 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:32:59.709998158 +0000 UTC Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.306393 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.306427 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:39 crc kubenswrapper[4803]: E0127 21:48:39.306507 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.306393 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.306387 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:39 crc kubenswrapper[4803]: E0127 21:48:39.306657 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:39 crc kubenswrapper[4803]: E0127 21:48:39.306749 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:39 crc kubenswrapper[4803]: E0127 21:48:39.306837 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.374961 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.375012 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.375053 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.375072 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.375084 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:39Z","lastTransitionTime":"2026-01-27T21:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.478459 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.478506 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.478516 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.478530 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.478539 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:39Z","lastTransitionTime":"2026-01-27T21:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.580683 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.580717 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.580727 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.580745 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.580755 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:39Z","lastTransitionTime":"2026-01-27T21:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.683081 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.683118 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.683126 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.683139 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.683149 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:39Z","lastTransitionTime":"2026-01-27T21:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.785194 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.785270 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.785292 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.785318 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.785338 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:39Z","lastTransitionTime":"2026-01-27T21:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.887921 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.887990 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.888008 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.888030 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.888047 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:39Z","lastTransitionTime":"2026-01-27T21:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.990445 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.990483 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.990491 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.990503 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:39 crc kubenswrapper[4803]: I0127 21:48:39.990512 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:39Z","lastTransitionTime":"2026-01-27T21:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.095761 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.095825 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.095842 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.095898 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.095916 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:40Z","lastTransitionTime":"2026-01-27T21:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.198926 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.198966 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.198976 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.198991 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.199001 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:40Z","lastTransitionTime":"2026-01-27T21:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.291335 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:14:49.914074141 +0000 UTC Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.301591 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.301901 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.302104 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.302294 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.302615 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:40Z","lastTransitionTime":"2026-01-27T21:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.405400 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.405454 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.405468 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.405488 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.405502 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:40Z","lastTransitionTime":"2026-01-27T21:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.507867 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.508525 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.508539 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.508554 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.508569 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:40Z","lastTransitionTime":"2026-01-27T21:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.611008 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.611043 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.611053 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.611080 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.611090 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:40Z","lastTransitionTime":"2026-01-27T21:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.713408 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.713456 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.713469 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.713485 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.713495 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:40Z","lastTransitionTime":"2026-01-27T21:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.817654 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.817699 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.817708 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.817722 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.817732 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:40Z","lastTransitionTime":"2026-01-27T21:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.920340 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.920398 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.920412 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.920430 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:40 crc kubenswrapper[4803]: I0127 21:48:40.920442 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:40Z","lastTransitionTime":"2026-01-27T21:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.023149 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.023187 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.023199 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.023216 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.023226 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:41Z","lastTransitionTime":"2026-01-27T21:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.125864 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.125904 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.125913 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.125926 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.125936 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:41Z","lastTransitionTime":"2026-01-27T21:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.228730 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.228767 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.228778 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.228795 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.228806 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:41Z","lastTransitionTime":"2026-01-27T21:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.292917 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 10:19:39.51020202 +0000 UTC Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.306203 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.306247 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.306262 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.306334 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:41 crc kubenswrapper[4803]: E0127 21:48:41.306425 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:41 crc kubenswrapper[4803]: E0127 21:48:41.306584 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:41 crc kubenswrapper[4803]: E0127 21:48:41.306708 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:41 crc kubenswrapper[4803]: E0127 21:48:41.306747 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
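Every failure above reduces to the same root cause: nothing has yet written a network configuration into /etc/kubernetes/cni/net.d/, so no pod sandbox can be given networking. A CNI network config is a small JSON file in that directory; the following is a minimal illustrative example (a bridge network with host-local IPAM, per the CNI spec), not the file an OpenShift network operator such as Multus or OVN-Kubernetes would actually write there once it starts:

{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16"
  }
}

Until a real config appears in that directory, the sandbox-creation retries above will keep failing for the same four pods.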
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.331109 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.331162 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.331180 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.331204 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.331223 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:41Z","lastTransitionTime":"2026-01-27T21:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.433488 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.433534 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.433547 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.433565 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.433579 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:41Z","lastTransitionTime":"2026-01-27T21:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.535463 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.535496 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.535505 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.535522 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.535532 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:41Z","lastTransitionTime":"2026-01-27T21:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.638179 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.638239 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.638260 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.638289 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.638309 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:41Z","lastTransitionTime":"2026-01-27T21:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.742022 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.742059 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.742067 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.742081 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.742091 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:41Z","lastTransitionTime":"2026-01-27T21:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.844994 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.845062 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.845123 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.845149 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.845167 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:41Z","lastTransitionTime":"2026-01-27T21:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.947619 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.947652 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.947686 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.947702 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:41 crc kubenswrapper[4803]: I0127 21:48:41.947713 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:41Z","lastTransitionTime":"2026-01-27T21:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.051059 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.051143 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.051176 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.051208 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.051228 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:42Z","lastTransitionTime":"2026-01-27T21:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.153785 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.153885 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.153906 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.153928 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.153944 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:42Z","lastTransitionTime":"2026-01-27T21:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.256161 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.256200 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.256211 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.256225 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.256234 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:42Z","lastTransitionTime":"2026-01-27T21:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
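The certificate_manager.go entry that opens the next line reports the kubelet-serving certificate (expiring 2026-02-24) together with a freshly jittered rotation deadline; the deadline differs on every pass in this section (2025-12-28, 2025-11-28, 2025-12-24, 2026-01-13) and always lies in the past relative to the node clock, which suggests rotation keeps being re-attempted. To read such a certificate's validity window directly, a minimal Go sketch; the PEM path is an assumption, matching the conventional location of the kubelet's rotated serving certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Assumed path: rotated kubelet serving certs conventionally live
	// under /var/lib/kubelet/pki/ on the node.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-server-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("NotBefore:", cert.NotBefore)
	fmt.Println("NotAfter: ", cert.NotAfter)
}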
Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.293974 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:21:39.391026388 +0000 UTC Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.357910 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.357970 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.357982 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.358019 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.358032 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:42Z","lastTransitionTime":"2026-01-27T21:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.460237 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.460308 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.460320 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.460362 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.460378 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:42Z","lastTransitionTime":"2026-01-27T21:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.563033 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.563095 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.563114 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.563138 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.563152 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:42Z","lastTransitionTime":"2026-01-27T21:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.666344 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.666433 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.666462 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.666493 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.666515 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:42Z","lastTransitionTime":"2026-01-27T21:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.769346 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.769422 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.769446 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.769476 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.769498 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:42Z","lastTransitionTime":"2026-01-27T21:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.872569 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.872643 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.872668 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.872699 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.872727 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:42Z","lastTransitionTime":"2026-01-27T21:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.976083 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.976159 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.976171 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.976226 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:42 crc kubenswrapper[4803]: I0127 21:48:42.976244 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:42Z","lastTransitionTime":"2026-01-27T21:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.078164 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.078208 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.078220 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.078238 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.078253 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:43Z","lastTransitionTime":"2026-01-27T21:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.180538 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.180573 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.180584 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.180602 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.180614 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:43Z","lastTransitionTime":"2026-01-27T21:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.282839 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.282973 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.282993 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.283017 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.283034 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:43Z","lastTransitionTime":"2026-01-27T21:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.294168 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:33:37.18476462 +0000 UTC Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.306652 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.306724 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.306764 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:43 crc kubenswrapper[4803]: E0127 21:48:43.306812 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.306832 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:43 crc kubenswrapper[4803]: E0127 21:48:43.306937 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:43 crc kubenswrapper[4803]: E0127 21:48:43.307010 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:43 crc kubenswrapper[4803]: E0127 21:48:43.307135 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.386108 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.386158 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.386175 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.386200 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.386216 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:43Z","lastTransitionTime":"2026-01-27T21:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.489610 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.489642 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.489651 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.489662 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.489671 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:43Z","lastTransitionTime":"2026-01-27T21:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.592593 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.592621 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.592629 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.592642 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.592650 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:43Z","lastTransitionTime":"2026-01-27T21:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.695962 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.696020 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.696037 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.696061 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.696077 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:43Z","lastTransitionTime":"2026-01-27T21:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.802063 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.802156 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.802176 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.802206 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.802226 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:43Z","lastTransitionTime":"2026-01-27T21:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.905572 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.905646 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.905669 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.905699 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:43 crc kubenswrapper[4803]: I0127 21:48:43.905725 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:43Z","lastTransitionTime":"2026-01-27T21:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.008716 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.008793 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.008812 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.008841 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.008904 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.112812 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.112914 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.112938 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.112969 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.112990 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.215960 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.216018 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.216038 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.216066 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.216087 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.294671 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:25:03.250999975 +0000 UTC Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.317610 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.317644 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.317655 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.317669 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.317680 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.420387 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.420446 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.420465 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.420490 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.420511 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
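The "Error updating node status, will retry" entries a little further below show why the NotReady condition never clears server-side: the status patch is rejected because the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a TLS certificate that expired on 2025-08-24, while the node clock reads 2026-01-27. A minimal Go sketch for reading the expiry off such an endpoint; verification is deliberately skipped so the handshake survives the expired certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint taken from the webhook error below; InsecureSkipVerify lets us
	// read the expired leaf certificate instead of failing the handshake.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		log.Fatal("no peer certificate presented")
	}
	leaf := state.PeerCertificates[0]
	fmt.Println("subject: ", leaf.Subject)
	fmt.Println("notAfter:", leaf.NotAfter)
	fmt.Println("expired: ", time.Now().After(leaf.NotAfter))
}

With every patch attempt failing the same way, the kubelet retries indefinitely, which is what produces the repeating condition blocks around this point.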
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.522942 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.523000 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.523017 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.523040 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.523057 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.625104 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.625167 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.625184 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.625210 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.625229 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.688996 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.689054 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.689071 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.689094 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.689111 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:44 crc kubenswrapper[4803]: E0127 21:48:44.707091 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:44Z is after 
2025-08-24T17:21:41Z"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.711024 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.711083 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.711104 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.711128 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.711145 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:44 crc kubenswrapper[4803]: E0127 21:48:44.730116 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:44Z is after 2025-08-24T17:21:41Z"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.734529 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.734623 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.734642 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.734667 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.734719 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:44 crc kubenswrapper[4803]: E0127 21:48:44.756652 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:44Z is after 2025-08-24T17:21:41Z"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.761837 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.761932 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.761959 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.761991 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.762016 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:44 crc kubenswrapper[4803]: E0127 21:48:44.785071 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:44Z is after 2025-08-24T17:21:41Z"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.790558 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.790600 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.790619 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.790648 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.790672 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:44 crc kubenswrapper[4803]: E0127 21:48:44.808020 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:44Z is after 2025-08-24T17:21:41Z"
Jan 27 21:48:44 crc kubenswrapper[4803]: E0127 21:48:44.808603 4803 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.811285 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.811344 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.811364 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.811392 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.811418 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.914750 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.914808 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.914820 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.914866 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:44 crc kubenswrapper[4803]: I0127 21:48:44.914880 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:44Z","lastTransitionTime":"2026-01-27T21:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
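Each attempt in the retry burst above fails identically: the serving certificate presented by the node.network-node-identity webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-27, so the API server cannot call the validating webhook and every node-status patch comes back as an Internal error. A minimal sketch for confirming the certificate's validity window from the node itself; the host, port, and dates come from the log, but the script is illustrative and assumes Python 3 with the third-party cryptography package (version 42 or newer) installed:

    import ssl
    from datetime import datetime, timezone
    from cryptography import x509

    # Fetch the webhook's serving certificate without trust verification;
    # host and port are taken from the Post URL in the kubelet error above.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())

    now = datetime.now(timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)  # log shows 2025-08-24T17:21:41Z
    print("expired:  ", now > cert.not_valid_after_utc)

If the window has simply lapsed, for example a CRC VM resumed long after its certificates were issued, the remedy is rotating the cluster certificates rather than changing the network configuration.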
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.017965 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.018015 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.018032 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.018055 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.018071 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:45Z","lastTransitionTime":"2026-01-27T21:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.121061 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.121155 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.121174 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.121200 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.121217 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:45Z","lastTransitionTime":"2026-01-27T21:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.224164 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.224253 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.224277 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.224308 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.224331 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:45Z","lastTransitionTime":"2026-01-27T21:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
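The NodeNotReady block repeats roughly every 100 ms (the .018, .121, .224 timestamps) because the container runtime keeps reporting NetworkReady=false until a CNI configuration shows up in /etc/kubernetes/cni/net.d/. A rough stand-in for that readiness probe is sketched below; the real check lives in CRI-O's ocicni layer, and the accepted file extensions here are an assumption, not its exact list:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the NetworkPluginNotReady message above.
	confDir := "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		// .conf/.conflist/.json are the extensions ocicni-style
		// loaders typically accept (assumed here for illustration).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI configuration file found; node stays NotReady")
		return
	}
	fmt.Println("CNI configs:", confs)
}
```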
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.295923 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:05:05.913159197 +0000 UTC
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.306357 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.306369 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.306422 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.306583 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:48:45 crc kubenswrapper[4803]: E0127 21:48:45.306583 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:48:45 crc kubenswrapper[4803]: E0127 21:48:45.306821 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:48:45 crc kubenswrapper[4803]: E0127 21:48:45.307379 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:48:45 crc kubenswrapper[4803]: E0127 21:48:45.307661 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
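For context on the certificate_manager.go entry above: the kubelet-serving certificate is still valid until 2026-02-24, but its rotation deadline (2025-12-24) is already in the past, so rotation is overdue while the node clock sits at 2026-01-27. The deadline is a jittered point inside the certificate's validity window; the sketch below illustrates the idea with assumed fractions, not the kubelet's exact constants:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point between 70% and 90% of the
// certificate's lifetime. The real kubelet certificate manager jitters
// similarly; the exact fractions here are assumptions for illustration.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(frac * float64(lifetime)))
}

func main() {
	// Expiry copied from the certificate_manager.go line above; the
	// issue time is assumed to be one year earlier.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)

	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Printf("rotate at %s (past due: %t)\n",
		deadline.Format(time.RFC3339), time.Now().After(deadline))
}
```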
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.307836 4803 scope.go:117] "RemoveContainer" containerID="6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.326481 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.326540 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.326556 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.326576 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.326617 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:45Z","lastTransitionTime":"2026-01-27T21:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.429674 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.429749 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.429765 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.430209 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.430268 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:45Z","lastTransitionTime":"2026-01-27T21:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.533004 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.533057 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.533076 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.533099 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.533117 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:45Z","lastTransitionTime":"2026-01-27T21:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.635930 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.635979 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.635991 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.636011 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.636026 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:45Z","lastTransitionTime":"2026-01-27T21:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.738492 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.738537 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.738549 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.738566 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.738578 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:45Z","lastTransitionTime":"2026-01-27T21:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.774024 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/2.log" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.776938 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f"} Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.777737 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.788999 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5
b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z"
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.808898 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z"
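The status patches embedded in these "Failed to update status for pod" entries are hard to read because the patch JSON is a quoted string inside the err field, which is itself quoted by the log line, so quotes arrive triple-escaped here. Each strconv.Unquote peels one quoting layer. A small Go helper, fed a singly-escaped sample truncated to the metadata stanza of the iptables-alerter patch above (the empty status body is a placeholder, not log content):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Truncated sample of the escaped patch carried in the
	// err="failed to patch status \"{...}\"" field above.
	raw := `"{\"metadata\":{\"uid\":\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\"},\"status\":{}}"`

	// Peel the outer quoting layer to recover the patch JSON itself.
	patch, err := strconv.Unquote(raw)
	if err != nil {
		fmt.Println("unquote failed:", err)
		return
	}

	var obj map[string]any
	if err := json.Unmarshal([]byte(patch), &obj); err != nil {
		fmt.Println("not valid JSON:", err)
		return
	}
	pretty, _ := json.MarshalIndent(obj, "", "  ")
	fmt.Println(string(pretty))
}
```

The $setElementOrder/conditions directive visible in the recovered patches is strategic-merge-patch syntax: it pins the ordering of the conditions list while only some elements are replaced.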
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.826527 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.840771 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.840810 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.840933 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.840947 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.840966 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:45Z","lastTransitionTime":"2026-01-27T21:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.841329 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:36Z\\\",\\\"message\\\":\\\"2026-01-27T21:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9\\\\n2026-01-27T21:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9 to /host/opt/cni/bin/\\\\n2026-01-27T21:47:51Z [verbose] multus-daemon started\\\\n2026-01-27T21:47:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T21:48:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z"
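The multus-qnns7 patch above also records why kube-multus restarted (restartCount 1, exit code 1): the daemon waited from 21:47:51 to 21:48:36 for the OVN-Kubernetes readiness indicator file and then hit "pollimmediate error: timed out waiting for the condition". A stdlib sketch of that kind of wait; the path and the roughly 45 s budget come from the log, while the 1 s interval is an assumption (the real daemon uses apimachinery's polling helpers, hence the "pollimmediate" wording):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// Path from the multus termination message above; the indicator file
	// is written by the default network plugin once it is ready.
	indicator := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"

	timeout := time.After(45 * time.Second) // rough budget observed in the log
	tick := time.NewTicker(time.Second)     // assumed poll interval
	defer tick.Stop()

	for {
		if _, err := os.Stat(indicator); err == nil {
			fmt.Println("default network ready:", indicator)
			return
		}
		select {
		case <-timeout:
			// Mirrors the daemon's failure path: report and exit
			// non-zero, after which the kubelet restarts the container.
			fmt.Println("timed out waiting for the condition")
			os.Exit(1)
		case <-tick.C:
		}
	}
}
```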
Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.852481 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 
21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.866064 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.875006 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97826e-c50d-4cda-b3ce-56bbf0e97f6a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db7e62956ef3526e02fdb5bc208185103cfbe40b86346dc993fb956bdb15cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ffe7f19851c6226af442882ecaa7514cc38d6bd1467881cbb700190fb58cd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.884541 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.898176 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.908445 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.922107 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.939944 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0125572d11adf9e37e8ad7f9829f4e35266899c0
12f237ba2df4f566b650104f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150112 6467 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150147 6467 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kube\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatus
es\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.943474 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.943516 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.943527 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.943542 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.943551 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:45Z","lastTransitionTime":"2026-01-27T21:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.952963 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.963512 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.975051 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.987032 4803 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:45 crc kubenswrapper[4803]: I0127 21:48:45.996439 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:45Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.046156 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.046191 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.046202 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.046218 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.046229 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:46Z","lastTransitionTime":"2026-01-27T21:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.148063 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.148097 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.148105 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.148119 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.148128 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:46Z","lastTransitionTime":"2026-01-27T21:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.255445 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.255516 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.255534 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.255559 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.255585 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:46Z","lastTransitionTime":"2026-01-27T21:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.296540 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:05:26.777675334 +0000 UTC Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.357322 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.357354 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.357365 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.357381 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.357390 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:46Z","lastTransitionTime":"2026-01-27T21:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.460264 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.460334 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.460351 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.460374 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.460391 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:46Z","lastTransitionTime":"2026-01-27T21:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.562498 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.562532 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.562545 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.562561 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.562571 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:46Z","lastTransitionTime":"2026-01-27T21:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.665412 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.665476 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.665502 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.665534 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.665557 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:46Z","lastTransitionTime":"2026-01-27T21:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.768024 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.768089 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.768106 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.768129 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.768148 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:46Z","lastTransitionTime":"2026-01-27T21:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.782168 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/3.log" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.783005 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/2.log" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.786242 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f" exitCode=1 Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.786291 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f"} Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.786344 4803 scope.go:117] "RemoveContainer" containerID="6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.787655 4803 scope.go:117] "RemoveContainer" containerID="0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f" Jan 27 21:48:46 crc kubenswrapper[4803]: E0127 21:48:46.788021 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.812721 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.828937 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.845029 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.871164 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.871216 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.871234 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.871257 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.871274 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:46Z","lastTransitionTime":"2026-01-27T21:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.873335 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24
cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.891458 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:5
1Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.911183 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.934565 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8279948
8ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.953697 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.969927 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.974037 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.974103 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.974122 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.974146 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.974164 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:46Z","lastTransitionTime":"2026-01-27T21:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:46 crc kubenswrapper[4803]: I0127 21:48:46.992120 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:36Z\\\",\\\"message\\\":\\\"2026-01-27T21:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9\\\\n2026-01-27T21:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9 to /host/opt/cni/bin/\\\\n2026-01-27T21:47:51Z [verbose] multus-daemon started\\\\n2026-01-27T21:47:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T21:48:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.008886 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.025926 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.045273 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97826e-c50d-4cda-b3ce-56bbf0e97f6a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db7e62956ef3526e02fdb5bc208185103cfbe40b86346dc993fb956bdb15cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ffe7f19851c6226af442882ecaa7514cc38d6bd1467881cbb700190fb58cd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.064313 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.077019 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.077088 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.077108 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.077134 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.077153 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:47Z","lastTransitionTime":"2026-01-27T21:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.086079 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.108186 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.137292 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0125572d11adf9e37e8ad7f9829f4e35266899c0
12f237ba2df4f566b650104f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0b410d78b1035265f66aa147c479da5dd6bdbeb8cf68e79eaf3209862af81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:14Z\\\",\\\"message\\\":\\\"external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150112 6467 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 21:48:14.150147 6467 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kube\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:46Z\\\",\\\"message\\\":\\\"k-metrics-daemon-72wq6\\\\nI0127 21:48:46.172087 6864 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-72wq6 in node crc\\\\nI0127 21:48:46.172089 6864 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0127 21:48:46.172095 6864 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0127 21:48:46.172100 6864 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nF0127 21:48:46.172099 6864 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.179591 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.179657 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.179675 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.179702 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.179724 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:47Z","lastTransitionTime":"2026-01-27T21:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.282157 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.282230 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.282250 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.282274 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.282292 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:47Z","lastTransitionTime":"2026-01-27T21:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.296715 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 00:32:41.671898657 +0000 UTC Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.307631 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:47 crc kubenswrapper[4803]: E0127 21:48:47.307964 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.308277 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:47 crc kubenswrapper[4803]: E0127 21:48:47.308367 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.308542 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:47 crc kubenswrapper[4803]: E0127 21:48:47.308692 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.309026 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:47 crc kubenswrapper[4803]: E0127 21:48:47.309108 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.384184 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.384239 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.384255 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.384277 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.384293 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:47Z","lastTransitionTime":"2026-01-27T21:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.486560 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.486606 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.486617 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.486634 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.486646 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:47Z","lastTransitionTime":"2026-01-27T21:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.589402 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.589455 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.589467 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.589484 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.589498 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:47Z","lastTransitionTime":"2026-01-27T21:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.692658 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.692707 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.692720 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.692738 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.692751 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:47Z","lastTransitionTime":"2026-01-27T21:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.791280 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/3.log" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.794870 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.794899 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.794908 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.794940 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.794951 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:47Z","lastTransitionTime":"2026-01-27T21:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.795659 4803 scope.go:117] "RemoveContainer" containerID="0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f" Jan 27 21:48:47 crc kubenswrapper[4803]: E0127 21:48:47.795787 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.813628 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.824741 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.835529 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.850066 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:36Z\\\",\\\"message\\\":\\\"2026-01-27T21:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9\\\\n2026-01-27T21:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9 to /host/opt/cni/bin/\\\\n2026-01-27T21:47:51Z [verbose] multus-daemon started\\\\n2026-01-27T21:47:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T21:48:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.862303 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 
21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.876102 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.888750 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97826e-c50d-4cda-b3ce-56bbf0e97f6a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db7e62956ef3526e02fdb5bc208185103cfbe40b86346dc993fb956bdb15cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ffe7f19851c6226af442882ecaa7514cc38d6bd1467881cbb700190fb58cd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.897361 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.897399 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.897411 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.897427 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.897439 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:47Z","lastTransitionTime":"2026-01-27T21:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.902813 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.917034 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.932809 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.944154 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.965428 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0125572d11adf9e37e8ad7f9829f4e35266899c0
12f237ba2df4f566b650104f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:46Z\\\",\\\"message\\\":\\\"k-metrics-daemon-72wq6\\\\nI0127 21:48:46.172087 6864 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-72wq6 in node crc\\\\nI0127 21:48:46.172089 6864 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0127 21:48:46.172095 6864 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0127 21:48:46.172100 6864 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nF0127 21:48:46.172099 6864 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.980039 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.991941 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:48:47Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.999207 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.999243 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.999254 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.999270 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:47 crc kubenswrapper[4803]: I0127 21:48:47.999281 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:47Z","lastTransitionTime":"2026-01-27T21:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.002542 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.015481 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97
fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binar
y-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.023798 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.101817 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.101870 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.101900 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.101914 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.101924 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:48Z","lastTransitionTime":"2026-01-27T21:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.205316 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.205395 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.205430 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.205460 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.205482 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:48Z","lastTransitionTime":"2026-01-27T21:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.297406 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:13:48.339526894 +0000 UTC Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.307741 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.307782 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.307795 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.307810 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.307822 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:48Z","lastTransitionTime":"2026-01-27T21:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.319267 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.331230 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.342210 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.355550 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 
2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.364588 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.376573 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.388735 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.401117 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.409829 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.409881 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.409893 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.409906 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.409915 4803 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:48Z","lastTransitionTime":"2026-01-27T21:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.413394 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:36Z\\\",\\\"message\\\":\\\"2026-01-27T21:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9\\\\n2026-01-27T21:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9 to /host/opt/cni/bin/\\\\n2026-01-27T21:47:51Z [verbose] multus-daemon started\\\\n2026-01-27T21:47:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T21:48:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.431367 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 
21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.445139 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.454730 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97826e-c50d-4cda-b3ce-56bbf0e97f6a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db7e62956ef3526e02fdb5bc208185103cfbe40b86346dc993fb956bdb15cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ffe7f19851c6226af442882ecaa7514cc38d6bd1467881cbb700190fb58cd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.466874 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.479939 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.489871 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.500235 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.512366 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.512430 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.512442 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.512457 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.512467 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:48Z","lastTransitionTime":"2026-01-27T21:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.515607 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:46Z\\\",\\\"message\\\":\\\"k-metrics-daemon-72wq6\\\\nI0127 21:48:46.172087 6864 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-72wq6 in node crc\\\\nI0127 21:48:46.172089 6864 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0127 21:48:46.172095 6864 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0127 21:48:46.172100 6864 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nF0127 21:48:46.172099 6864 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:48Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.616091 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.616122 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.616129 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.616142 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.616151 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:48Z","lastTransitionTime":"2026-01-27T21:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.718766 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.718821 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.718836 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.718875 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.718884 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:48Z","lastTransitionTime":"2026-01-27T21:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.822344 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.822404 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.822419 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.822442 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.822457 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:48Z","lastTransitionTime":"2026-01-27T21:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.924651 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.924706 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.924723 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.924746 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:48 crc kubenswrapper[4803]: I0127 21:48:48.924764 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:48Z","lastTransitionTime":"2026-01-27T21:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.027730 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.027764 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.027771 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.027783 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.027792 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:49Z","lastTransitionTime":"2026-01-27T21:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.130403 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.130511 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.130531 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.130555 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.130571 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:49Z","lastTransitionTime":"2026-01-27T21:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.233369 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.233429 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.233445 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.233468 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.233486 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:49Z","lastTransitionTime":"2026-01-27T21:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.297832 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:35:16.060351755 +0000 UTC Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.306155 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.306179 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.306191 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:49 crc kubenswrapper[4803]: E0127 21:48:49.306679 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:49 crc kubenswrapper[4803]: E0127 21:48:49.306521 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.306259 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:49 crc kubenswrapper[4803]: E0127 21:48:49.306788 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:49 crc kubenswrapper[4803]: E0127 21:48:49.307079 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.336678 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.337106 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.337312 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.337517 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.337756 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:49Z","lastTransitionTime":"2026-01-27T21:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.441073 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.441125 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.441143 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.441165 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.441181 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:49Z","lastTransitionTime":"2026-01-27T21:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.544359 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.544424 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.544450 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.544483 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.544508 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:49Z","lastTransitionTime":"2026-01-27T21:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.647187 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.647262 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.647286 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.647319 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.647342 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:49Z","lastTransitionTime":"2026-01-27T21:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.750762 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.750913 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.750934 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.750967 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.750987 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:49Z","lastTransitionTime":"2026-01-27T21:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.854186 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.854303 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.854313 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.854327 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.854337 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:49Z","lastTransitionTime":"2026-01-27T21:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.957589 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.957669 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.957690 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.957716 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:49 crc kubenswrapper[4803]: I0127 21:48:49.957732 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:49Z","lastTransitionTime":"2026-01-27T21:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.060717 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.060781 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.060799 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.060826 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.060886 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:50Z","lastTransitionTime":"2026-01-27T21:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.164085 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.164148 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.164165 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.164190 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.164210 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:50Z","lastTransitionTime":"2026-01-27T21:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.267530 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.267586 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.267603 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.267627 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.267644 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:50Z","lastTransitionTime":"2026-01-27T21:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.298279 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:06:18.165972496 +0000 UTC Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.370492 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.370554 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.370572 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.370599 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.370623 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:50Z","lastTransitionTime":"2026-01-27T21:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.473689 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.473787 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.473806 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.473890 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.473908 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:50Z","lastTransitionTime":"2026-01-27T21:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.576840 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.576942 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.576959 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.576983 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.577003 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:50Z","lastTransitionTime":"2026-01-27T21:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.680518 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.680588 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.680605 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.680630 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.680647 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:50Z","lastTransitionTime":"2026-01-27T21:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.784082 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.784132 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.784147 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.784170 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.784184 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:50Z","lastTransitionTime":"2026-01-27T21:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.887650 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.887704 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.887721 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.887744 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.887761 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:50Z","lastTransitionTime":"2026-01-27T21:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.990348 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.990385 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.990397 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.990412 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:50 crc kubenswrapper[4803]: I0127 21:48:50.990423 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:50Z","lastTransitionTime":"2026-01-27T21:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.089833 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.090036 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.090074 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.090116 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.090080314 +0000 UTC m=+147.506102063 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.090168 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.090206 4803 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.090213 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.090363 4803 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.090403 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 
Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.090437 4803 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.090271 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.090254768 +0000 UTC m=+147.506276487 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.090491 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.090469464 +0000 UTC m=+147.506491203 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
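The "No retries permitted until ... (durationBeforeRetry 1m4s)" entries show the volume manager's per-operation exponential backoff: every consecutive failure of the same mount or unmount roughly doubles the wait before the next attempt, so a 64 s delay means these operations have already failed a number of times since startup. A minimal sketch of that policy; the exact constants (500 ms start, factor 2, low-minutes cap) are assumptions for illustration, not quotes of the kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the previous wait and clamps it, mirroring the
    // doubling-with-cap behavior visible in the log: 500ms, 1s, 2s, 4s, ...
    // so the 8th consecutive failure waits 1m4s.
    func nextBackoff(prev time.Duration) time.Duration {
        const (
            initial = 500 * time.Millisecond
            factor  = 2
            maxWait = 2*time.Minute + 2*time.Second // assumed cap
        )
        if prev == 0 {
            return initial
        }
        next := prev * factor
        if next > maxWait {
            return maxWait
        }
        return next
    }

    func main() {
        var wait time.Duration
        for i := 1; i <= 9; i++ {
            wait = nextBackoff(wait)
            fmt.Printf("failure %d -> retry in %v\n", i, wait)
        }
    }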
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.092881 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.092908 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.092918 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.092934 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.092945 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:51Z","lastTransitionTime":"2026-01-27T21:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.191649 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.191974 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.192028 4803 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.192045 4803 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.192116 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.19209719 +0000 UTC m=+147.608118899 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.195030 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.195061 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.195073 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.195091 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.195103 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:51Z","lastTransitionTime":"2026-01-27T21:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.297937 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.298042 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.298096 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.298124 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.298174 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:51Z","lastTransitionTime":"2026-01-27T21:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.298460 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:05:44.812621242 +0000 UTC Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.305958 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.306006 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.306061 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.306158 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.306389 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.306439 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.306670 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:51 crc kubenswrapper[4803]: E0127 21:48:51.306744 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.400943 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.401048 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.401069 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.401093 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.401110 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:51Z","lastTransitionTime":"2026-01-27T21:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.504242 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.504305 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.504323 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.504348 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.504366 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:51Z","lastTransitionTime":"2026-01-27T21:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.607044 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.607136 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.607159 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.607184 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.607201 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:51Z","lastTransitionTime":"2026-01-27T21:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.710573 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.710693 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.710745 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.710772 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.710794 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:51Z","lastTransitionTime":"2026-01-27T21:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.812773 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.812888 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.812908 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.812937 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.812959 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:51Z","lastTransitionTime":"2026-01-27T21:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.916242 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.916375 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.916394 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.916417 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:51 crc kubenswrapper[4803]: I0127 21:48:51.916433 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:51Z","lastTransitionTime":"2026-01-27T21:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.019822 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.019924 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.019942 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.019966 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.019983 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:52Z","lastTransitionTime":"2026-01-27T21:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.122452 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.122581 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.122601 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.122628 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.122644 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:52Z","lastTransitionTime":"2026-01-27T21:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.226045 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.226112 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.226131 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.226154 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.226171 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:52Z","lastTransitionTime":"2026-01-27T21:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.298698 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 10:16:11.838796566 +0000 UTC Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.329679 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.330170 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.330188 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.330215 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.330233 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:52Z","lastTransitionTime":"2026-01-27T21:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.432609 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.432670 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.432687 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.432712 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.432744 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:52Z","lastTransitionTime":"2026-01-27T21:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.535645 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.535707 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.535718 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.535738 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.535751 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:52Z","lastTransitionTime":"2026-01-27T21:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.638019 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.638091 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.638110 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.638134 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.638152 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:52Z","lastTransitionTime":"2026-01-27T21:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.740569 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.740627 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.740643 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.740667 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.740684 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:52Z","lastTransitionTime":"2026-01-27T21:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.843387 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.843421 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.843432 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.843446 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.843461 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:52Z","lastTransitionTime":"2026-01-27T21:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.946215 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.946276 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.946293 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.946318 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:52 crc kubenswrapper[4803]: I0127 21:48:52.946335 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:52Z","lastTransitionTime":"2026-01-27T21:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.048967 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.049025 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.049041 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.049066 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.049083 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:53Z","lastTransitionTime":"2026-01-27T21:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.152868 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.152934 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.152952 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.152973 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.153004 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:53Z","lastTransitionTime":"2026-01-27T21:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.256405 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.256491 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.256515 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.256543 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.256561 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:53Z","lastTransitionTime":"2026-01-27T21:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.299870 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 19:47:05.589103621 +0000 UTC Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.306242 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.306325 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.306346 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:53 crc kubenswrapper[4803]: E0127 21:48:53.306415 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.306242 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:53 crc kubenswrapper[4803]: E0127 21:48:53.306642 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:53 crc kubenswrapper[4803]: E0127 21:48:53.306780 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:53 crc kubenswrapper[4803]: E0127 21:48:53.306911 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.360287 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.360359 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.360380 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.360405 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.360422 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:53Z","lastTransitionTime":"2026-01-27T21:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.463772 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.463836 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.463892 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.463924 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.463947 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:53Z","lastTransitionTime":"2026-01-27T21:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.567437 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.567512 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.567537 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.567563 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.567579 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:53Z","lastTransitionTime":"2026-01-27T21:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.670016 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.670077 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.670093 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.670115 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.670130 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:53Z","lastTransitionTime":"2026-01-27T21:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.773167 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.773232 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.773274 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.773300 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.773318 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:53Z","lastTransitionTime":"2026-01-27T21:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.876156 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.876248 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.876266 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.876289 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.876307 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:53Z","lastTransitionTime":"2026-01-27T21:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.979038 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.979153 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.979177 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.979235 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:53 crc kubenswrapper[4803]: I0127 21:48:53.979256 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:53Z","lastTransitionTime":"2026-01-27T21:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.081775 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.081874 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.081892 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.081915 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.081931 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:54Z","lastTransitionTime":"2026-01-27T21:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.185454 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.185511 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.185528 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.185552 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.185570 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:54Z","lastTransitionTime":"2026-01-27T21:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.288527 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.288613 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.288621 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.288632 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.288641 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:54Z","lastTransitionTime":"2026-01-27T21:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.300921 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:58:43.650172433 +0000 UTC Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.391556 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.391613 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.391632 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.391656 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.391674 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:54Z","lastTransitionTime":"2026-01-27T21:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.494501 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.494558 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.494585 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.494605 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.494616 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:54Z","lastTransitionTime":"2026-01-27T21:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.597276 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.597328 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.597346 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.597371 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.597387 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:54Z","lastTransitionTime":"2026-01-27T21:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.700433 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.700483 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.700498 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.700515 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.700526 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:54Z","lastTransitionTime":"2026-01-27T21:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.802875 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.802927 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.802937 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.802948 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.802956 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:54Z","lastTransitionTime":"2026-01-27T21:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.905307 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.905368 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.905385 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.905411 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.905432 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:54Z","lastTransitionTime":"2026-01-27T21:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.971811 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.971909 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.971929 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.971953 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.971970 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:54Z","lastTransitionTime":"2026-01-27T21:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:54 crc kubenswrapper[4803]: E0127 21:48:54.991312 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:54Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.995409 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.995465 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.995483 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.995507 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:54 crc kubenswrapper[4803]: I0127 21:48:54.995526 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:54Z","lastTransitionTime":"2026-01-27T21:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: E0127 21:48:55.008940 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:55Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.013613 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.013673 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.013692 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.013717 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.013735 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: E0127 21:48:55.032160 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:55Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.036609 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.036660 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.036681 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.036709 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.036730 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: E0127 21:48:55.051764 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:55Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.056336 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.056385 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.056407 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.056436 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.056459 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: E0127 21:48:55.073114 4803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9610eea-40df-4e3a-82a8-03c1d35078a8\\\",\\\"systemUUID\\\":\\\"676ec8ff-b158-409e-ada7-33047b2b95b9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:55Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:55 crc kubenswrapper[4803]: E0127 21:48:55.073338 4803 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.078369 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.078493 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.078574 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.078608 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.078644 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.182385 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.182441 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.182458 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.182481 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.182500 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.286311 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.286738 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.286942 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.287096 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.287292 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.301674 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 04:20:53.06701375 +0000 UTC Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.306083 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.306104 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.306162 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:55 crc kubenswrapper[4803]: E0127 21:48:55.306328 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.306379 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:55 crc kubenswrapper[4803]: E0127 21:48:55.306503 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:55 crc kubenswrapper[4803]: E0127 21:48:55.306877 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:55 crc kubenswrapper[4803]: E0127 21:48:55.307145 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.390412 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.390478 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.390496 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.390521 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.390540 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.493750 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.493799 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.493815 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.493837 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.493892 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.597380 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.597770 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.598049 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.598215 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.598349 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.701507 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.701898 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.702035 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.702179 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.702325 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.805386 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.805428 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.805441 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.805459 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.805473 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.908971 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.909385 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.909614 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.909884 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:55 crc kubenswrapper[4803]: I0127 21:48:55.910185 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:55Z","lastTransitionTime":"2026-01-27T21:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.012670 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.012887 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.012958 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.013089 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.013169 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:56Z","lastTransitionTime":"2026-01-27T21:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.116414 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.116474 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.116491 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.116517 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.116539 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:56Z","lastTransitionTime":"2026-01-27T21:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.219313 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.219658 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.219789 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.219937 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.220067 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:56Z","lastTransitionTime":"2026-01-27T21:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.302461 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:56:59.289324717 +0000 UTC Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.322766 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.323065 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.323254 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.323640 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.323968 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:56Z","lastTransitionTime":"2026-01-27T21:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.427274 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.427339 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.427352 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.427368 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.427420 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:56Z","lastTransitionTime":"2026-01-27T21:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.530004 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.530321 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.530450 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.530565 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.530663 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:56Z","lastTransitionTime":"2026-01-27T21:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.633581 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.633878 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.633995 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.634177 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.634219 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:56Z","lastTransitionTime":"2026-01-27T21:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.736886 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.736929 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.736946 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.736971 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.736989 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:56Z","lastTransitionTime":"2026-01-27T21:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.839451 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.839525 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.839570 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.839605 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.839632 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:56Z","lastTransitionTime":"2026-01-27T21:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.942670 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.942721 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.942740 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.942764 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:56 crc kubenswrapper[4803]: I0127 21:48:56.942779 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:56Z","lastTransitionTime":"2026-01-27T21:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.046227 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.046313 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.046333 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.046357 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.046374 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:57Z","lastTransitionTime":"2026-01-27T21:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.149446 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.149499 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.149510 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.149525 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.149534 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:57Z","lastTransitionTime":"2026-01-27T21:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.252545 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.252652 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.252709 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.252740 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.252759 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:57Z","lastTransitionTime":"2026-01-27T21:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.303575 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:44:23.573302648 +0000 UTC Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.306038 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.306272 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.306389 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:57 crc kubenswrapper[4803]: E0127 21:48:57.306541 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.306620 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:57 crc kubenswrapper[4803]: E0127 21:48:57.306721 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:57 crc kubenswrapper[4803]: E0127 21:48:57.306802 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:57 crc kubenswrapper[4803]: E0127 21:48:57.306923 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.355192 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.355486 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.355620 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.355729 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.355820 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:57Z","lastTransitionTime":"2026-01-27T21:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.459042 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.459090 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.459106 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.459128 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.459143 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:57Z","lastTransitionTime":"2026-01-27T21:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.561533 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.561577 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.561589 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.561609 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.561623 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:57Z","lastTransitionTime":"2026-01-27T21:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.664373 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.664445 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.664469 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.664499 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.664521 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:57Z","lastTransitionTime":"2026-01-27T21:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.773087 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.773527 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.774339 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.774531 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.774745 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:57Z","lastTransitionTime":"2026-01-27T21:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.878415 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.878455 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.878471 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.878487 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.878499 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:57Z","lastTransitionTime":"2026-01-27T21:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.985472 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.985529 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.985559 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.985584 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:57 crc kubenswrapper[4803]: I0127 21:48:57.985604 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:57Z","lastTransitionTime":"2026-01-27T21:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.088932 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.088970 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.088996 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.089014 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.089025 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:58Z","lastTransitionTime":"2026-01-27T21:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.191352 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.191398 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.191410 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.191427 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.191439 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:58Z","lastTransitionTime":"2026-01-27T21:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.294648 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.294692 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.294704 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.294722 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.294733 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:58Z","lastTransitionTime":"2026-01-27T21:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.304028 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:01:07.381878064 +0000 UTC Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.323403 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.341221 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qnns7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a912f01-6d26-421f-8b21-fb2f98d5c2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:36Z\\\",\\\"message\\\":\\\"2026-01-27T21:47:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9\\\\n2026-01-27T21:47:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_31df60fb-3cfe-4bf2-8b81-dc28804487f9 to /host/opt/cni/bin/\\\\n2026-01-27T21:47:51Z [verbose] multus-daemon started\\\\n2026-01-27T21:47:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T21:48:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47kbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qnns7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.356582 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" err="failed to patch status 
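
The termination message above shows the same dependency from the multus side: the kube-multus container polls for the readiness indicator file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovn-kubernetes writes once its controller is healthy, and exits 1 when the wait times out ("pollimmediate error: timed out waiting for the condition"). A rough stand-in for that wait loop follows; the path is taken from the log, and the 45s budget is inferred from the 21:47:51 to 21:48:36 timestamps rather than from any multus default.

// readiness.go: rough stand-in for the multus readiness indicator wait.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path until it exists or the timeout elapses.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	path := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf" // from the log
	if err := waitForFile(path, time.Second, 45*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, "readiness indicator check:", err)
		os.Exit(1)
	}
	fmt.Println("default network ready:", path)
}
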
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c089f04-d9e7-4bca-b221-dfaf322e1ea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422ad13f9065ca33c288738f67edca53a8d784317b076d8787f824496111163a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2cce56c62618941207f5b47069f5371635912ab067ead7acfd0e155f66d091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nsfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kvp7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 
21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.372711 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2368a79f-8b27-4530-b237-fb1a38194eda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc883abdf024e1c0791ef359e7029f514f7fba782913a2a43f145b23fc2008f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a578776d2cc68f2c87d5b6875b270b5588f9318c5907979e2d75d0a460539411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8682ed8bf608247ec3b73f4a1471efabfc91611fcfc6bacce1180487236eaa2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.389758 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fdcd1070c6e4cd3b6738e085244ac24d3a2bc6b5e84667ddb90e4f8f0bdc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.397636 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.397665 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.397676 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.397693 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.397705 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:58Z","lastTransitionTime":"2026-01-27T21:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.405696 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
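
By this point every status patch is failing with the same root cause: the API server must call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 before accepting the patch, and the certificate that webhook serves expired at 2025-08-24T17:21:41Z, about five months before the node's current clock of 2026-01-27T21:48:58Z (consistent with a machine that sat powered off past its certificates' lifetimes). A quick way to confirm from the node is to complete a TLS handshake with verification disabled and print the served certificate's validity window; this is a diagnostic sketch, with the address taken from the log:

// certprobe.go: diagnostic sketch for the recurring x509 webhook failure.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	// Webhook endpoint taken from the failing Post calls in the log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we want to read the cert, not trust it
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the x509 errors above")
	}
}

InsecureSkipVerify is acceptable here only because the goal is to inspect the certificate, not to trust the connection.
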
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.420759 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f6eb756a12d5afb4b9a8490bdad649e5b98110acdb362fa4553502e1194fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203683a30b27f4e06af63382f93843bab89b7bbb70bd27da2df56cdc98f3a4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.432425 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-72wq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d757da7-4079-4a7a-806d-560834fe95ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:48:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-72wq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.454606 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"cure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 21:47:47.064857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 21:47:47.064861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 21:47:47.065195 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 21:47:47.070251 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2640766399/tls.crt::/tmp/serving-cert-2640766399/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769550451\\\\\\\\\\\\\\\" (2026-01-27 21:47:30 +0000 UTC to 2026-02-26 21:47:31 +0000 UTC (now=2026-01-27 21:47:47.070222404 +0000 UTC))\\\\\\\"\\\\nI0127 21:47:47.070309 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 21:47:47.070370 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070409 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 21:47:47.070414 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 21:47:47.070423 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 21:47:47.070515 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 21:47:47.070530 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0127 21:47:47.070590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.468766 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97826e-c50d-4cda-b3ce-56bbf0e97f6a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db7e62956ef3526e02fdb5bc208185103cfbe40b86346dc993fb956bdb15cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ffe7f19851c6226af442882ecaa7514cc38d6bd1467881cbb700190fb58cd04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4afc180ce4e6e28b1d403c7316b4a58f7541be72c26615061bb69e45a9f684aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.484273 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8853597d1af5e56a9dfe8cd327757bd84a8ea06a149737ea0966001956ee65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 
2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.499893 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.500234 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.500299 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.500375 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.500451 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:58Z","lastTransitionTime":"2026-01-27T21:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.517209 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db438ee2-57c2-4cbf-9d4b-96f8587647d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0125572d11adf9e37e8ad7f9829f4e35266899c0
12f237ba2df4f566b650104f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T21:48:46Z\\\",\\\"message\\\":\\\"k-metrics-daemon-72wq6\\\\nI0127 21:48:46.172087 6864 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-72wq6 in node crc\\\\nI0127 21:48:46.172089 6864 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0127 21:48:46.172095 6864 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0127 21:48:46.172100 6864 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nF0127 21:48:46.172099 6864 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:46Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T21:48:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xnhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6dhj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.540475 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m87bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e37235-ed32-42bc-b5b0-49278fed9593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a094f45924d8fba082a91bcbd7a7a48bc7f74e63812f2cbfa8d8751397e2fd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e76c7e7f58814d56106659a2b24cb0543ef9cef94b30c52cb80e97fad098d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c1f2126b354c1630f6aed8ffb4b79cead5bfbc985d27a3cf0486c0a7ce5896c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a954d30d7b634b867b858257f07fb4da495631479aca3bceed10bed9a73558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608404728d484fd83051104b522edad1a62a13f265a627dc5f159d3832156244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc0e82d71ef9421c3617774d33e4a9b79dc29fce91ae66a4f559d587d9efab12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f80f83f0372a0d7f335fddb7766d518e8e9cdc51bdb535232b2759d4dd4ad8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T21:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T21:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk2r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m87bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.556523 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flq97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4970974-561c-402f-9b67-aa8c43445762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df437a16642976f1d6b1784def02a4ac0c6a308f82984a5d928e777ebae4a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7mcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flq97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.576682 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.590754 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwmq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dba4d19-a8ee-4103-94e5-b1e0b352df62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afc624a4f294e78c59e254641f9cc46cb1b164839dc53f149a608b122f3a6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4shf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwmq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.602881 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb23e3d-ee70-4f1d-85c0-005373cca336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T21:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d80f7605cdb79b23a8baaf97c76fbadd0794f9eb00fe2d67eb08ff18c9a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:47:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flmnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T21:47:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d56gp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T21:48:58Z is after 2025-08-24T17:21:41Z" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.603203 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.603225 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.603235 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.603248 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.603259 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:58Z","lastTransitionTime":"2026-01-27T21:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.705170 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.705241 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.705253 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.705266 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.705276 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:58Z","lastTransitionTime":"2026-01-27T21:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.806448 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.806478 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.806486 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.806497 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.806506 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:58Z","lastTransitionTime":"2026-01-27T21:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.908477 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.908532 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.908548 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.908568 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:58 crc kubenswrapper[4803]: I0127 21:48:58.908580 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:58Z","lastTransitionTime":"2026-01-27T21:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.011230 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.011281 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.011298 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.011320 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.011337 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:59Z","lastTransitionTime":"2026-01-27T21:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.114994 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.115049 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.115065 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.115093 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.115115 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:59Z","lastTransitionTime":"2026-01-27T21:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.217515 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.217555 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.217565 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.217579 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.217591 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:59Z","lastTransitionTime":"2026-01-27T21:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.305036 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 10:46:23.910003242 +0000 UTC Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.306254 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.306438 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:48:59 crc kubenswrapper[4803]: E0127 21:48:59.306549 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.306389 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.306422 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:48:59 crc kubenswrapper[4803]: E0127 21:48:59.308162 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:48:59 crc kubenswrapper[4803]: E0127 21:48:59.308105 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:48:59 crc kubenswrapper[4803]: E0127 21:48:59.308398 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.319251 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.319282 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.319290 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.319318 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.319330 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:59Z","lastTransitionTime":"2026-01-27T21:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.421560 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.421769 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.421881 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.421959 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.422022 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:59Z","lastTransitionTime":"2026-01-27T21:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.525481 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.525528 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.525542 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.525557 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.525567 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:59Z","lastTransitionTime":"2026-01-27T21:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.629177 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.629260 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.629279 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.629336 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.629359 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:59Z","lastTransitionTime":"2026-01-27T21:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.733162 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.733231 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.733248 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.733278 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.733299 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:59Z","lastTransitionTime":"2026-01-27T21:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.835687 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.835743 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.835760 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.835783 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.835801 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:59Z","lastTransitionTime":"2026-01-27T21:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.938593 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.938665 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.938689 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.938717 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:48:59 crc kubenswrapper[4803]: I0127 21:48:59.938739 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:48:59Z","lastTransitionTime":"2026-01-27T21:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.040919 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.040948 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.040957 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.040970 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.040978 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:00Z","lastTransitionTime":"2026-01-27T21:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.144050 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.144080 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.144089 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.144101 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.144110 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:00Z","lastTransitionTime":"2026-01-27T21:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.246089 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.246125 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.246142 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.246162 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.246176 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:00Z","lastTransitionTime":"2026-01-27T21:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.305158 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:29:05.729205535 +0000 UTC Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.348381 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.348426 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.348437 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.348455 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.348469 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:00Z","lastTransitionTime":"2026-01-27T21:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.451340 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.451407 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.451430 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.451459 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.451479 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:00Z","lastTransitionTime":"2026-01-27T21:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.554790 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.554824 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.554836 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.554882 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.554895 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:00Z","lastTransitionTime":"2026-01-27T21:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.657270 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.657345 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.657368 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.657387 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.657400 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:00Z","lastTransitionTime":"2026-01-27T21:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.760343 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.760372 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.760382 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.760396 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.760404 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:00Z","lastTransitionTime":"2026-01-27T21:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.862218 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.862506 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.862734 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.863013 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.863106 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:00Z","lastTransitionTime":"2026-01-27T21:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.966060 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.966098 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.966118 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.966131 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:00 crc kubenswrapper[4803]: I0127 21:49:00.966140 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:00Z","lastTransitionTime":"2026-01-27T21:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.074257 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.074306 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.074319 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.074337 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.074349 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:01Z","lastTransitionTime":"2026-01-27T21:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.176431 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.176468 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.176477 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.176493 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.176502 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:01Z","lastTransitionTime":"2026-01-27T21:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.278977 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.279011 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.279047 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.279066 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.279074 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:01Z","lastTransitionTime":"2026-01-27T21:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.306973 4803 scope.go:117] "RemoveContainer" containerID="0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.307030 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:28:58.64877598 +0000 UTC Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.307066 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:49:01 crc kubenswrapper[4803]: E0127 21:49:01.307521 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.307128 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.307086 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:49:01 crc kubenswrapper[4803]: E0127 21:49:01.307598 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.307154 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:49:01 crc kubenswrapper[4803]: E0127 21:49:01.307664 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:49:01 crc kubenswrapper[4803]: E0127 21:49:01.307713 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:49:01 crc kubenswrapper[4803]: E0127 21:49:01.308139 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.381496 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.381531 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.381539 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.381552 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.381560 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:01Z","lastTransitionTime":"2026-01-27T21:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.484391 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.484692 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.484792 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.484914 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.484998 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:01Z","lastTransitionTime":"2026-01-27T21:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.587672 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.588031 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.588207 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.588374 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.588523 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:01Z","lastTransitionTime":"2026-01-27T21:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.691326 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.691395 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.691419 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.691450 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.691472 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:01Z","lastTransitionTime":"2026-01-27T21:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.796341 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.796423 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.796451 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.796477 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.796499 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:01Z","lastTransitionTime":"2026-01-27T21:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.899293 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.899646 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.899728 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.899800 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:01 crc kubenswrapper[4803]: I0127 21:49:01.899899 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:01Z","lastTransitionTime":"2026-01-27T21:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.003359 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.003748 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.003995 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.004399 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.004580 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:02Z","lastTransitionTime":"2026-01-27T21:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.108075 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.108126 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.108139 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.108156 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.108168 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:02Z","lastTransitionTime":"2026-01-27T21:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.210712 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.210778 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.210794 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.210822 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.210838 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:02Z","lastTransitionTime":"2026-01-27T21:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.307795 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:53:54.206065811 +0000 UTC Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.312807 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.312890 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.312909 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.312934 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.312953 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:02Z","lastTransitionTime":"2026-01-27T21:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.318244 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.415657 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.415696 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.415704 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.415722 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.415733 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:02Z","lastTransitionTime":"2026-01-27T21:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.518412 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.518470 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.518487 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.518511 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.518530 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:02Z","lastTransitionTime":"2026-01-27T21:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.622080 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.622136 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.622153 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.622177 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.622193 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:02Z","lastTransitionTime":"2026-01-27T21:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.725261 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.725305 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.725317 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.725332 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.725343 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:02Z","lastTransitionTime":"2026-01-27T21:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.828549 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.828600 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.828611 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.828627 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.828636 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:02Z","lastTransitionTime":"2026-01-27T21:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.930273 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.930329 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.930346 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.930371 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:02 crc kubenswrapper[4803]: I0127 21:49:02.930389 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:02Z","lastTransitionTime":"2026-01-27T21:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.033768 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.033882 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.033906 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.033939 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.033959 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:03Z","lastTransitionTime":"2026-01-27T21:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.136470 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.136537 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.136552 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.136570 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.136581 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:03Z","lastTransitionTime":"2026-01-27T21:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.239297 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.239684 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.239939 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.240153 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.240348 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:03Z","lastTransitionTime":"2026-01-27T21:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.306267 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:49:03 crc kubenswrapper[4803]: E0127 21:49:03.306456 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.306611 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.306613 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:49:03 crc kubenswrapper[4803]: E0127 21:49:03.306783 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.307157 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 21:49:03 crc kubenswrapper[4803]: E0127 21:49:03.307048 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae" Jan 27 21:49:03 crc kubenswrapper[4803]: E0127 21:49:03.307635 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.308027 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:38:08.926872467 +0000 UTC Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.343374 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.343424 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.343435 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.343451 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.343465 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:03Z","lastTransitionTime":"2026-01-27T21:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.446239 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.446278 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.446289 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.446304 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.446315 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:03Z","lastTransitionTime":"2026-01-27T21:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.548689 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.548724 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.548735 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.548749 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.548760 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:03Z","lastTransitionTime":"2026-01-27T21:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.651788 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.651832 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.651876 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.651899 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.651913 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:03Z","lastTransitionTime":"2026-01-27T21:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.755131 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.755195 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.755209 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.755258 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.755273 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:03Z","lastTransitionTime":"2026-01-27T21:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.857572 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.857631 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.857647 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.857672 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.857689 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:03Z","lastTransitionTime":"2026-01-27T21:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.960776 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.960828 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.960843 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.960885 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:03 crc kubenswrapper[4803]: I0127 21:49:03.960901 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:03Z","lastTransitionTime":"2026-01-27T21:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.064732 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.064809 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.064827 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.064914 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.064942 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:04Z","lastTransitionTime":"2026-01-27T21:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.167510 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.167990 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.168009 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.168035 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.168053 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:04Z","lastTransitionTime":"2026-01-27T21:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.270339 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.270377 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.270405 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.270421 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.270434 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:04Z","lastTransitionTime":"2026-01-27T21:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.309157 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:03:03.434434922 +0000 UTC Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.373064 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.373121 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.373138 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.373160 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.373177 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:04Z","lastTransitionTime":"2026-01-27T21:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.477286 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.477389 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.477406 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.477432 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.477450 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:04Z","lastTransitionTime":"2026-01-27T21:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.580385 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.580447 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.580463 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.580487 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.580507 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:04Z","lastTransitionTime":"2026-01-27T21:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.683773 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.683907 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.683930 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.683964 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.683985 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:04Z","lastTransitionTime":"2026-01-27T21:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.787255 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.787328 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.787346 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.787372 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.787391 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:04Z","lastTransitionTime":"2026-01-27T21:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.891301 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.891358 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.891370 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.891391 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.891412 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:04Z","lastTransitionTime":"2026-01-27T21:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.995260 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.995335 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.995354 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.995384 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:04 crc kubenswrapper[4803]: I0127 21:49:04.995404 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:04Z","lastTransitionTime":"2026-01-27T21:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.099569 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.100397 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.100608 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.100993 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.101393 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:05Z","lastTransitionTime":"2026-01-27T21:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.205065 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.205283 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.205309 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.205334 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.205356 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:05Z","lastTransitionTime":"2026-01-27T21:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.241702 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.241796 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.241814 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.241839 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.241892 4803 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T21:49:05Z","lastTransitionTime":"2026-01-27T21:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.306888 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.306933 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 21:49:05 crc kubenswrapper[4803]: E0127 21:49:05.307115 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.307385 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:05 crc kubenswrapper[4803]: E0127 21:49:05.307565 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:05 crc kubenswrapper[4803]: E0127 21:49:05.307927 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.307454 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:05 crc kubenswrapper[4803]: E0127 21:49:05.308742 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.309512 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:38:17.807309513 +0000 UTC
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.310649 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.310601 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"]
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.311525 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.316391 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.316568 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.316658 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.316773 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.322765 4803 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.435701 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-flq97" podStartSLOduration=77.435672709 podStartE2EDuration="1m17.435672709s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:05.416681622 +0000 UTC m=+97.832703331" watchObservedRunningTime="2026-01-27 21:49:05.435672709 +0000 UTC m=+97.851694448"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.455805 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ba59abcf-889b-4810-a6b1-018f71a1577a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.455906 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba59abcf-889b-4810-a6b1-018f71a1577a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.456117 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba59abcf-889b-4810-a6b1-018f71a1577a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.456199 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ba59abcf-889b-4810-a6b1-018f71a1577a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.456258 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba59abcf-889b-4810-a6b1-018f71a1577a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.472972 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-gwmq2" podStartSLOduration=77.472940725 podStartE2EDuration="1m17.472940725s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:05.455928291 +0000 UTC m=+97.871950070" watchObservedRunningTime="2026-01-27 21:49:05.472940725 +0000 UTC m=+97.888962464"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.473258 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podStartSLOduration=77.473249434 podStartE2EDuration="1m17.473249434s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:05.472513514 +0000 UTC m=+97.888535223" watchObservedRunningTime="2026-01-27 21:49:05.473249434 +0000 UTC m=+97.889271173"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.495355 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-m87bw" podStartSLOduration=77.495343494 podStartE2EDuration="1m17.495343494s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:05.494312676 +0000 UTC m=+97.910334405" watchObservedRunningTime="2026-01-27 21:49:05.495343494 +0000 UTC m=+97.911365233"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.520011 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qnns7" podStartSLOduration=77.519990723 podStartE2EDuration="1m17.519990723s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:05.519736385 +0000 UTC m=+97.935758104" watchObservedRunningTime="2026-01-27 21:49:05.519990723 +0000 UTC m=+97.936012432"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.540574 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kvp7m" podStartSLOduration=77.540546832 podStartE2EDuration="1m17.540546832s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:05.539446522 +0000 UTC m=+97.955468231" watchObservedRunningTime="2026-01-27 21:49:05.540546832 +0000 UTC m=+97.956568561"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.558018 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ba59abcf-889b-4810-a6b1-018f71a1577a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.558177 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba59abcf-889b-4810-a6b1-018f71a1577a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.558233 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ba59abcf-889b-4810-a6b1-018f71a1577a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.558285 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba59abcf-889b-4810-a6b1-018f71a1577a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.558323 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ba59abcf-889b-4810-a6b1-018f71a1577a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.558369 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba59abcf-889b-4810-a6b1-018f71a1577a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.558540 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ba59abcf-889b-4810-a6b1-018f71a1577a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.559971 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba59abcf-889b-4810-a6b1-018f71a1577a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.564839 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba59abcf-889b-4810-a6b1-018f71a1577a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.565613 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.565594442 podStartE2EDuration="1m18.565594442s" podCreationTimestamp="2026-01-27 21:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:05.563963438 +0000 UTC m=+97.979985177" watchObservedRunningTime="2026-01-27 21:49:05.565594442 +0000 UTC m=+97.981616181"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.576079 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba59abcf-889b-4810-a6b1-018f71a1577a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pf2cb\" (UID: \"ba59abcf-889b-4810-a6b1-018f71a1577a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.634663 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.700141 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=3.700118498 podStartE2EDuration="3.700118498s" podCreationTimestamp="2026-01-27 21:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:05.700093717 +0000 UTC m=+98.116115416" watchObservedRunningTime="2026-01-27 21:49:05.700118498 +0000 UTC m=+98.116140217"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.731807 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.731788444 podStartE2EDuration="1m18.731788444s" podCreationTimestamp="2026-01-27 21:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:05.7181644 +0000 UTC m=+98.134186129" watchObservedRunningTime="2026-01-27 21:49:05.731788444 +0000 UTC m=+98.147810143"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.732442 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=47.732437281 podStartE2EDuration="47.732437281s" podCreationTimestamp="2026-01-27 21:48:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:05.730467139 +0000 UTC m=+98.146488848" watchObservedRunningTime="2026-01-27 21:49:05.732437281 +0000 UTC m=+98.148458980"
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.859195 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb" event={"ID":"ba59abcf-889b-4810-a6b1-018f71a1577a","Type":"ContainerStarted","Data":"8e98ca25081703d96bb3420aa314cf75033437607e4cab7aac7e697c36d8aeef"}
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.859432 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb" event={"ID":"ba59abcf-889b-4810-a6b1-018f71a1577a","Type":"ContainerStarted","Data":"60f84cae3b1181928c9c4e7595980b793663ab507e19cf19a3144afbf0e74ec8"}
Jan 27 21:49:05 crc kubenswrapper[4803]: I0127 21:49:05.875373 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pf2cb" podStartSLOduration=77.875355181 podStartE2EDuration="1m17.875355181s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:05.874692344 +0000 UTC m=+98.290714043" watchObservedRunningTime="2026-01-27 21:49:05.875355181 +0000 UTC m=+98.291376890"
Jan 27 21:49:06 crc kubenswrapper[4803]: I0127 21:49:06.870899 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:06 crc kubenswrapper[4803]: E0127 21:49:06.871100 4803 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 21:49:06 crc kubenswrapper[4803]: E0127 21:49:06.871195 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs podName:0d757da7-4079-4a7a-806d-560834fe95ae nodeName:}" failed. No retries permitted until 2026-01-27 21:50:10.871166239 +0000 UTC m=+163.287187978 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs") pod "network-metrics-daemon-72wq6" (UID: "0d757da7-4079-4a7a-806d-560834fe95ae") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 21:49:07 crc kubenswrapper[4803]: I0127 21:49:07.305778 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:07 crc kubenswrapper[4803]: I0127 21:49:07.306314 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:07 crc kubenswrapper[4803]: I0127 21:49:07.306346 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:07 crc kubenswrapper[4803]: I0127 21:49:07.306483 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:07 crc kubenswrapper[4803]: E0127 21:49:07.306476 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:07 crc kubenswrapper[4803]: E0127 21:49:07.306641 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:07 crc kubenswrapper[4803]: E0127 21:49:07.306800 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:07 crc kubenswrapper[4803]: E0127 21:49:07.307051 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:09 crc kubenswrapper[4803]: I0127 21:49:09.306671 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:09 crc kubenswrapper[4803]: I0127 21:49:09.306773 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:09 crc kubenswrapper[4803]: E0127 21:49:09.306836 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:09 crc kubenswrapper[4803]: E0127 21:49:09.306988 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:09 crc kubenswrapper[4803]: I0127 21:49:09.307093 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:09 crc kubenswrapper[4803]: E0127 21:49:09.307210 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:09 crc kubenswrapper[4803]: I0127 21:49:09.307260 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:09 crc kubenswrapper[4803]: E0127 21:49:09.307310 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:11 crc kubenswrapper[4803]: I0127 21:49:11.306238 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:11 crc kubenswrapper[4803]: I0127 21:49:11.306323 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:11 crc kubenswrapper[4803]: I0127 21:49:11.306246 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:11 crc kubenswrapper[4803]: E0127 21:49:11.306751 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:11 crc kubenswrapper[4803]: E0127 21:49:11.307042 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:11 crc kubenswrapper[4803]: E0127 21:49:11.307240 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:11 crc kubenswrapper[4803]: I0127 21:49:11.306271 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:11 crc kubenswrapper[4803]: E0127 21:49:11.307927 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:12 crc kubenswrapper[4803]: I0127 21:49:12.326834 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 27 21:49:13 crc kubenswrapper[4803]: I0127 21:49:13.306391 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:13 crc kubenswrapper[4803]: I0127 21:49:13.306421 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:13 crc kubenswrapper[4803]: I0127 21:49:13.306399 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:13 crc kubenswrapper[4803]: E0127 21:49:13.306554 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:13 crc kubenswrapper[4803]: I0127 21:49:13.306471 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:13 crc kubenswrapper[4803]: E0127 21:49:13.306781 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:13 crc kubenswrapper[4803]: E0127 21:49:13.307023 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:13 crc kubenswrapper[4803]: E0127 21:49:13.307151 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:14 crc kubenswrapper[4803]: I0127 21:49:14.307480 4803 scope.go:117] "RemoveContainer" containerID="0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f"
Jan 27 21:49:14 crc kubenswrapper[4803]: E0127 21:49:14.308025 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6dhj4_openshift-ovn-kubernetes(db438ee2-57c2-4cbf-9d4b-96f8587647d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6"
Jan 27 21:49:15 crc kubenswrapper[4803]: I0127 21:49:15.306386 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:15 crc kubenswrapper[4803]: E0127 21:49:15.306597 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:15 crc kubenswrapper[4803]: I0127 21:49:15.306953 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:15 crc kubenswrapper[4803]: E0127 21:49:15.307078 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:15 crc kubenswrapper[4803]: I0127 21:49:15.307285 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:15 crc kubenswrapper[4803]: E0127 21:49:15.307395 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:15 crc kubenswrapper[4803]: I0127 21:49:15.307568 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:15 crc kubenswrapper[4803]: E0127 21:49:15.307651 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:17 crc kubenswrapper[4803]: I0127 21:49:17.306256 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:17 crc kubenswrapper[4803]: I0127 21:49:17.306291 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:17 crc kubenswrapper[4803]: I0127 21:49:17.306315 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:17 crc kubenswrapper[4803]: I0127 21:49:17.306383 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:17 crc kubenswrapper[4803]: E0127 21:49:17.306427 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:17 crc kubenswrapper[4803]: E0127 21:49:17.306531 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:17 crc kubenswrapper[4803]: E0127 21:49:17.306622 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:17 crc kubenswrapper[4803]: E0127 21:49:17.306714 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:18 crc kubenswrapper[4803]: I0127 21:49:18.354319 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=6.354256229 podStartE2EDuration="6.354256229s" podCreationTimestamp="2026-01-27 21:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:18.350603371 +0000 UTC m=+110.766625160" watchObservedRunningTime="2026-01-27 21:49:18.354256229 +0000 UTC m=+110.770277988"
Jan 27 21:49:19 crc kubenswrapper[4803]: I0127 21:49:19.306234 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:19 crc kubenswrapper[4803]: E0127 21:49:19.306405 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:19 crc kubenswrapper[4803]: I0127 21:49:19.306599 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:19 crc kubenswrapper[4803]: I0127 21:49:19.306714 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:19 crc kubenswrapper[4803]: I0127 21:49:19.306581 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:19 crc kubenswrapper[4803]: E0127 21:49:19.306781 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:19 crc kubenswrapper[4803]: E0127 21:49:19.306916 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:19 crc kubenswrapper[4803]: E0127 21:49:19.307028 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:21 crc kubenswrapper[4803]: I0127 21:49:21.306583 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:21 crc kubenswrapper[4803]: E0127 21:49:21.306718 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:21 crc kubenswrapper[4803]: I0127 21:49:21.306938 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:21 crc kubenswrapper[4803]: E0127 21:49:21.306984 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:21 crc kubenswrapper[4803]: I0127 21:49:21.307093 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:21 crc kubenswrapper[4803]: I0127 21:49:21.307191 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:21 crc kubenswrapper[4803]: E0127 21:49:21.307233 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:21 crc kubenswrapper[4803]: E0127 21:49:21.307395 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:22 crc kubenswrapper[4803]: I0127 21:49:22.963388 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qnns7_2a912f01-6d26-421f-8b21-fb2f98d5c2e6/kube-multus/1.log"
Jan 27 21:49:22 crc kubenswrapper[4803]: I0127 21:49:22.963963 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qnns7_2a912f01-6d26-421f-8b21-fb2f98d5c2e6/kube-multus/0.log"
Jan 27 21:49:22 crc kubenswrapper[4803]: I0127 21:49:22.964012 4803 generic.go:334] "Generic (PLEG): container finished" podID="2a912f01-6d26-421f-8b21-fb2f98d5c2e6" containerID="59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66" exitCode=1
Jan 27 21:49:22 crc kubenswrapper[4803]: I0127 21:49:22.964041 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qnns7" event={"ID":"2a912f01-6d26-421f-8b21-fb2f98d5c2e6","Type":"ContainerDied","Data":"59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66"}
Jan 27 21:49:22 crc kubenswrapper[4803]: I0127 21:49:22.964078 4803 scope.go:117] "RemoveContainer" containerID="693e80e3624007dc58cd5ff03f876e61146f2b47ef205786b739e82b7d8a37e5"
Jan 27 21:49:22 crc kubenswrapper[4803]: I0127 21:49:22.964599 4803 scope.go:117] "RemoveContainer" containerID="59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66"
Jan 27 21:49:22 crc kubenswrapper[4803]: E0127 21:49:22.964790 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-qnns7_openshift-multus(2a912f01-6d26-421f-8b21-fb2f98d5c2e6)\"" pod="openshift-multus/multus-qnns7" podUID="2a912f01-6d26-421f-8b21-fb2f98d5c2e6"
Jan 27 21:49:23 crc kubenswrapper[4803]: I0127 21:49:23.306007 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:23 crc kubenswrapper[4803]: E0127 21:49:23.306130 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:23 crc kubenswrapper[4803]: I0127 21:49:23.306181 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:23 crc kubenswrapper[4803]: I0127 21:49:23.306224 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:23 crc kubenswrapper[4803]: I0127 21:49:23.306260 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:23 crc kubenswrapper[4803]: E0127 21:49:23.306350 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:23 crc kubenswrapper[4803]: E0127 21:49:23.306560 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:23 crc kubenswrapper[4803]: E0127 21:49:23.306669 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:23 crc kubenswrapper[4803]: I0127 21:49:23.968342 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qnns7_2a912f01-6d26-421f-8b21-fb2f98d5c2e6/kube-multus/1.log"
Jan 27 21:49:25 crc kubenswrapper[4803]: I0127 21:49:25.306167 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:25 crc kubenswrapper[4803]: I0127 21:49:25.306230 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:25 crc kubenswrapper[4803]: E0127 21:49:25.306293 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:25 crc kubenswrapper[4803]: I0127 21:49:25.306335 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:25 crc kubenswrapper[4803]: I0127 21:49:25.306375 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:25 crc kubenswrapper[4803]: E0127 21:49:25.306560 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:25 crc kubenswrapper[4803]: E0127 21:49:25.306707 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:25 crc kubenswrapper[4803]: E0127 21:49:25.306803 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:26 crc kubenswrapper[4803]: I0127 21:49:26.307285 4803 scope.go:117] "RemoveContainer" containerID="0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f"
Jan 27 21:49:26 crc kubenswrapper[4803]: I0127 21:49:26.979944 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/3.log"
Jan 27 21:49:26 crc kubenswrapper[4803]: I0127 21:49:26.982815 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerStarted","Data":"95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7"}
Jan 27 21:49:26 crc kubenswrapper[4803]: I0127 21:49:26.983386 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4"
Jan 27 21:49:27 crc kubenswrapper[4803]: I0127 21:49:27.025805 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podStartSLOduration=99.025787725 podStartE2EDuration="1m39.025787725s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:27.025569949 +0000 UTC m=+119.441591648" watchObservedRunningTime="2026-01-27 21:49:27.025787725 +0000 UTC m=+119.441809444"
Jan 27 21:49:27 crc kubenswrapper[4803]: I0127 21:49:27.112814 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-72wq6"]
Jan 27 21:49:27 crc kubenswrapper[4803]: I0127 21:49:27.112965 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:27 crc kubenswrapper[4803]: E0127 21:49:27.113051 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:27 crc kubenswrapper[4803]: I0127 21:49:27.306079 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:27 crc kubenswrapper[4803]: I0127 21:49:27.306117 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:27 crc kubenswrapper[4803]: I0127 21:49:27.306168 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:27 crc kubenswrapper[4803]: E0127 21:49:27.306301 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:27 crc kubenswrapper[4803]: E0127 21:49:27.306370 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:27 crc kubenswrapper[4803]: E0127 21:49:27.306663 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:28 crc kubenswrapper[4803]: E0127 21:49:28.254213 4803 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 27 21:49:28 crc kubenswrapper[4803]: I0127 21:49:28.306766 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:28 crc kubenswrapper[4803]: E0127 21:49:28.308890 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:28 crc kubenswrapper[4803]: E0127 21:49:28.419030 4803 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 27 21:49:29 crc kubenswrapper[4803]: I0127 21:49:29.306453 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:29 crc kubenswrapper[4803]: E0127 21:49:29.306611 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:29 crc kubenswrapper[4803]: I0127 21:49:29.306926 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:29 crc kubenswrapper[4803]: E0127 21:49:29.307021 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:29 crc kubenswrapper[4803]: I0127 21:49:29.307064 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:29 crc kubenswrapper[4803]: E0127 21:49:29.307194 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:30 crc kubenswrapper[4803]: I0127 21:49:30.306127 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:30 crc kubenswrapper[4803]: E0127 21:49:30.306314 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:31 crc kubenswrapper[4803]: I0127 21:49:31.305963 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:31 crc kubenswrapper[4803]: I0127 21:49:31.306047 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:31 crc kubenswrapper[4803]: E0127 21:49:31.307120 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:31 crc kubenswrapper[4803]: E0127 21:49:31.307253 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:31 crc kubenswrapper[4803]: I0127 21:49:31.306050 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:31 crc kubenswrapper[4803]: E0127 21:49:31.307373 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:32 crc kubenswrapper[4803]: I0127 21:49:32.306638 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:32 crc kubenswrapper[4803]: E0127 21:49:32.307132 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:33 crc kubenswrapper[4803]: I0127 21:49:33.306160 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:33 crc kubenswrapper[4803]: I0127 21:49:33.306234 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:33 crc kubenswrapper[4803]: E0127 21:49:33.306295 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:33 crc kubenswrapper[4803]: I0127 21:49:33.306313 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:33 crc kubenswrapper[4803]: E0127 21:49:33.306453 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:33 crc kubenswrapper[4803]: E0127 21:49:33.306584 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:33 crc kubenswrapper[4803]: E0127 21:49:33.421012 4803 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 27 21:49:34 crc kubenswrapper[4803]: I0127 21:49:34.306380 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:34 crc kubenswrapper[4803]: E0127 21:49:34.306566 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:35 crc kubenswrapper[4803]: I0127 21:49:35.305998 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:35 crc kubenswrapper[4803]: I0127 21:49:35.306007 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:35 crc kubenswrapper[4803]: E0127 21:49:35.306279 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:35 crc kubenswrapper[4803]: E0127 21:49:35.306461 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:35 crc kubenswrapper[4803]: I0127 21:49:35.306685 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:35 crc kubenswrapper[4803]: E0127 21:49:35.306764 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:36 crc kubenswrapper[4803]: I0127 21:49:36.306731 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:36 crc kubenswrapper[4803]: E0127 21:49:36.306899 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:36 crc kubenswrapper[4803]: I0127 21:49:36.306999 4803 scope.go:117] "RemoveContainer" containerID="59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66"
Jan 27 21:49:37 crc kubenswrapper[4803]: I0127 21:49:37.020501 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qnns7_2a912f01-6d26-421f-8b21-fb2f98d5c2e6/kube-multus/1.log"
Jan 27 21:49:37 crc kubenswrapper[4803]: I0127 21:49:37.020782 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qnns7" event={"ID":"2a912f01-6d26-421f-8b21-fb2f98d5c2e6","Type":"ContainerStarted","Data":"a4168203fe1e337403d6d45baececb9bddd8657d937ea27698b6e75c27ff002a"}
Jan 27 21:49:37 crc kubenswrapper[4803]: I0127 21:49:37.306241 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:37 crc kubenswrapper[4803]: I0127 21:49:37.306320 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:37 crc kubenswrapper[4803]: I0127 21:49:37.306264 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:37 crc kubenswrapper[4803]: E0127 21:49:37.306388 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 21:49:37 crc kubenswrapper[4803]: E0127 21:49:37.306461 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 21:49:37 crc kubenswrapper[4803]: E0127 21:49:37.306606 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 21:49:38 crc kubenswrapper[4803]: I0127 21:49:38.306136 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:38 crc kubenswrapper[4803]: E0127 21:49:38.308182 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-72wq6" podUID="0d757da7-4079-4a7a-806d-560834fe95ae"
Jan 27 21:49:39 crc kubenswrapper[4803]: I0127 21:49:39.306543 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:39 crc kubenswrapper[4803]: I0127 21:49:39.306601 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:39 crc kubenswrapper[4803]: I0127 21:49:39.306625 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:39 crc kubenswrapper[4803]: I0127 21:49:39.309125 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 27 21:49:39 crc kubenswrapper[4803]: I0127 21:49:39.309911 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 27 21:49:39 crc kubenswrapper[4803]: I0127 21:49:39.310129 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 27 21:49:39 crc kubenswrapper[4803]: I0127 21:49:39.310230 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 27 21:49:40 crc kubenswrapper[4803]: I0127 21:49:40.306182 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6"
Jan 27 21:49:40 crc kubenswrapper[4803]: I0127 21:49:40.308081 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 27 21:49:40 crc kubenswrapper[4803]: I0127 21:49:40.309521 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.169398 4803 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.211488 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7p5kq"]
Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.213351 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq"
Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.218827 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2"]
Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.224223 4803 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.219406 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.225158 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.223109 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.226409 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.226424 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.226737 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.227314 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.227978 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.228071 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.228125 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.228337 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.228008 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.228479 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.228635 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.228834 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.231264 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.251486 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.252183 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.254026 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.261176 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.261562 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.261724 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.262009 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.262129 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.262236 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.263152 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kdr8w"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.263405 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-74666"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.263491 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.263509 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.264282 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.266106 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.267379 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.272413 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.272424 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.272631 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.272800 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.272821 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.272972 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273175 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273209 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c1fd2d-3326-44dc-9353-1c19a701826c-serving-cert\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273253 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/02c1fd2d-3326-44dc-9353-1c19a701826c-encryption-config\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273279 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-image-import-ca\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273296 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-audit\") pod 
\"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273312 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg4nq\" (UniqueName: \"kubernetes.io/projected/1cc14a3f-c523-4461-a3b6-ad41ce4392db-kube-api-access-vg4nq\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273330 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/02c1fd2d-3326-44dc-9353-1c19a701826c-node-pullsecrets\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273347 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-config\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273361 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc14a3f-c523-4461-a3b6-ad41ce4392db-config\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273384 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1cc14a3f-c523-4461-a3b6-ad41ce4392db-machine-approver-tls\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273402 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-etcd-serving-ca\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273426 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z649b\" (UniqueName: \"kubernetes.io/projected/02c1fd2d-3326-44dc-9353-1c19a701826c-kube-api-access-z649b\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273451 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/02c1fd2d-3326-44dc-9353-1c19a701826c-etcd-client\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc 
kubenswrapper[4803]: I0127 21:49:46.273469 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02c1fd2d-3326-44dc-9353-1c19a701826c-audit-dir\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.273483 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cc14a3f-c523-4461-a3b6-ad41ce4392db-auth-proxy-config\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.274158 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.274253 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.274339 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.274436 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.274522 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.274613 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.274701 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.274750 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.274823 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.277547 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.278232 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.282792 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.286046 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-stngg"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.287233 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.287263 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.287374 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.288436 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8lpmj"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.289100 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.289903 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7x4wr"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.290535 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.292243 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.293062 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.295921 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-h9nvv"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.296548 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.296746 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.296977 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.300271 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.300718 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.300766 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.300949 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.301014 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.301142 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.301215 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.301625 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.301773 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.301930 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.302245 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.302414 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.302449 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.302554 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.302745 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.302802 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.303354 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.303361 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 
21:49:46.303494 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.303545 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.303567 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.303842 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.303955 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.303974 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.317024 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.317147 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.317175 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.317350 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.317492 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.317672 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.317036 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.317824 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.318131 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.319233 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.319311 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-s9tzw"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.319441 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.320386 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.320700 4803 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console-operator"/"console-operator-config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.320885 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.321051 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.321315 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.322934 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.325716 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.326619 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9drvm"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.327375 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9drvm" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.328362 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.330767 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.333464 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pslb5"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.333595 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.334185 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbljw"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.334633 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.334946 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.336757 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.349652 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.350339 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.350447 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.350695 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.351291 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.353637 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.353673 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.353876 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.354003 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.354123 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.354164 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.354286 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.354498 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.354620 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.357504 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qrccx"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.358066 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.358341 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.358650 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.358726 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qrccx" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.358967 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.359464 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.360197 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.360396 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.360512 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.360620 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.360754 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.361059 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mgtlh"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.361531 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.363655 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.369432 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.370244 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.371301 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.371666 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.372098 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.372251 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.372534 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.373229 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374064 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bc2ab0a-3831-417d-95cd-f5e392217120-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xqpl4\" (UID: \"5bc2ab0a-3831-417d-95cd-f5e392217120\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374113 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c1fd2d-3326-44dc-9353-1c19a701826c-serving-cert\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374164 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-client-ca\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374189 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/02c1fd2d-3326-44dc-9353-1c19a701826c-encryption-config\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374210 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-image-import-ca\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374232 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a87a7bd1-74f2-4c14-a3c0-adf951393f10-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374258 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-audit\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374278 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc396037-51ea-4671-bc9d-821a5505ace9-serving-cert\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc 
kubenswrapper[4803]: I0127 21:49:46.374301 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg4nq\" (UniqueName: \"kubernetes.io/projected/1cc14a3f-c523-4461-a3b6-ad41ce4392db-kube-api-access-vg4nq\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374321 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-stngg\" (UID: \"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374339 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x984s\" (UniqueName: \"kubernetes.io/projected/fc396037-51ea-4671-bc9d-821a5505ace9-kube-api-access-x984s\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374361 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2-serving-cert\") pod \"openshift-config-operator-7777fb866f-stngg\" (UID: \"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374381 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/02c1fd2d-3326-44dc-9353-1c19a701826c-node-pullsecrets\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374399 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a87a7bd1-74f2-4c14-a3c0-adf951393f10-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374422 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-config\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374445 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc14a3f-c523-4461-a3b6-ad41ce4392db-config\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374463 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04116321-6817-48ae-9107-cd7bac2addf3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pxhm8\" (UID: \"04116321-6817-48ae-9107-cd7bac2addf3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374492 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1cc14a3f-c523-4461-a3b6-ad41ce4392db-machine-approver-tls\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374515 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-etcd-serving-ca\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374543 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z649b\" (UniqueName: \"kubernetes.io/projected/02c1fd2d-3326-44dc-9353-1c19a701826c-kube-api-access-z649b\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374562 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc2ab0a-3831-417d-95cd-f5e392217120-config\") pod \"kube-controller-manager-operator-78b949d7b-xqpl4\" (UID: \"5bc2ab0a-3831-417d-95cd-f5e392217120\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374585 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04116321-6817-48ae-9107-cd7bac2addf3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pxhm8\" (UID: \"04116321-6817-48ae-9107-cd7bac2addf3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374603 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/02c1fd2d-3326-44dc-9353-1c19a701826c-etcd-client\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374632 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bc2ab0a-3831-417d-95cd-f5e392217120-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xqpl4\" (UID: \"5bc2ab0a-3831-417d-95cd-f5e392217120\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374651 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7hnpf\" (UniqueName: \"kubernetes.io/projected/04116321-6817-48ae-9107-cd7bac2addf3-kube-api-access-7hnpf\") pod \"openshift-apiserver-operator-796bbdcf4f-pxhm8\" (UID: \"04116321-6817-48ae-9107-cd7bac2addf3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374674 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a87a7bd1-74f2-4c14-a3c0-adf951393f10-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374696 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl88p\" (UniqueName: \"kubernetes.io/projected/bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2-kube-api-access-kl88p\") pod \"openshift-config-operator-7777fb866f-stngg\" (UID: \"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374719 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02c1fd2d-3326-44dc-9353-1c19a701826c-audit-dir\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374737 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cc14a3f-c523-4461-a3b6-ad41ce4392db-auth-proxy-config\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374761 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2kf9\" (UniqueName: \"kubernetes.io/projected/a87a7bd1-74f2-4c14-a3c0-adf951393f10-kube-api-access-g2kf9\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374781 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-config\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.374801 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.375500 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1cc14a3f-c523-4461-a3b6-ad41ce4392db-config\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.376227 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.377438 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/02c1fd2d-3326-44dc-9353-1c19a701826c-node-pullsecrets\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.378173 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-config\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.378215 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02c1fd2d-3326-44dc-9353-1c19a701826c-audit-dir\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.378377 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.378890 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-etcd-serving-ca\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.379168 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-image-import-ca\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.378917 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cc14a3f-c523-4461-a3b6-ad41ce4392db-auth-proxy-config\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.379230 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.379510 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.379519 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/02c1fd2d-3326-44dc-9353-1c19a701826c-audit\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.379761 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-szkgj"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.380044 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.383330 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.383509 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c1fd2d-3326-44dc-9353-1c19a701826c-serving-cert\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.385366 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1cc14a3f-c523-4461-a3b6-ad41ce4392db-machine-approver-tls\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.387567 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/02c1fd2d-3326-44dc-9353-1c19a701826c-etcd-client\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.388426 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/02c1fd2d-3326-44dc-9353-1c19a701826c-encryption-config\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.397060 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.397106 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.397119 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kdr8w"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.397131 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n7mdf"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.398435 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-74666"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.398461 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.398475 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7x4wr"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.398490 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-s9tzw"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.398506 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.398913 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.399065 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.399077 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.399443 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.400153 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.400913 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.401363 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bgfw4"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.402203 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.402824 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.403341 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.403527 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.403661 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.403893 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.404744 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.407834 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f64jt"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.413071 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.414337 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9drvm"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.419970 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jh44p"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.420738 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.423769 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbljw"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.423959 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.425874 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7p5kq"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.431278 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.433597 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.435351 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pslb5"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.435404 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.438549 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.443744 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.444976 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.446192 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.449267 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8lpmj"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.450782 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.451083 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-stngg"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.453242 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.454832 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.455345 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.456512 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.459767 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.461300 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-h9nvv"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.463241 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bgfw4"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.465222 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.466824 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qrccx"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.468473 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xhhs6"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.469127 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xhhs6" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.470136 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-npwr7"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.470705 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.471654 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.472899 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-npwr7"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.473964 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.475309 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n7mdf"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.475603 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-config\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.475761 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bc2ab0a-3831-417d-95cd-f5e392217120-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xqpl4\" (UID: \"5bc2ab0a-3831-417d-95cd-f5e392217120\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.475809 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-client-ca\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.475873 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a87a7bd1-74f2-4c14-a3c0-adf951393f10-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.475903 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc396037-51ea-4671-bc9d-821a5505ace9-serving-cert\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.475926 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x984s\" (UniqueName: \"kubernetes.io/projected/fc396037-51ea-4671-bc9d-821a5505ace9-kube-api-access-x984s\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.475959 4803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-stngg\" (UID: \"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.475983 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2-serving-cert\") pod \"openshift-config-operator-7777fb866f-stngg\" (UID: \"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.475987 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.476006 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a87a7bd1-74f2-4c14-a3c0-adf951393f10-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.476035 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04116321-6817-48ae-9107-cd7bac2addf3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pxhm8\" (UID: \"04116321-6817-48ae-9107-cd7bac2addf3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.476084 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc2ab0a-3831-417d-95cd-f5e392217120-config\") pod \"kube-controller-manager-operator-78b949d7b-xqpl4\" (UID: \"5bc2ab0a-3831-417d-95cd-f5e392217120\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.476128 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04116321-6817-48ae-9107-cd7bac2addf3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pxhm8\" (UID: \"04116321-6817-48ae-9107-cd7bac2addf3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.476152 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bc2ab0a-3831-417d-95cd-f5e392217120-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xqpl4\" (UID: \"5bc2ab0a-3831-417d-95cd-f5e392217120\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.476181 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnpf\" (UniqueName: \"kubernetes.io/projected/04116321-6817-48ae-9107-cd7bac2addf3-kube-api-access-7hnpf\") pod \"openshift-apiserver-operator-796bbdcf4f-pxhm8\" (UID: \"04116321-6817-48ae-9107-cd7bac2addf3\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.476207 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a87a7bd1-74f2-4c14-a3c0-adf951393f10-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.476231 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl88p\" (UniqueName: \"kubernetes.io/projected/bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2-kube-api-access-kl88p\") pod \"openshift-config-operator-7777fb866f-stngg\" (UID: \"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.476254 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2kf9\" (UniqueName: \"kubernetes.io/projected/a87a7bd1-74f2-4c14-a3c0-adf951393f10-kube-api-access-g2kf9\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.477161 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc2ab0a-3831-417d-95cd-f5e392217120-config\") pod \"kube-controller-manager-operator-78b949d7b-xqpl4\" (UID: \"5bc2ab0a-3831-417d-95cd-f5e392217120\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.477253 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.477309 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.477687 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-stngg\" (UID: \"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.477925 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04116321-6817-48ae-9107-cd7bac2addf3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pxhm8\" (UID: \"04116321-6817-48ae-9107-cd7bac2addf3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.477963 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-config\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc 
kubenswrapper[4803]: I0127 21:49:46.478000 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-client-ca\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.478629 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a87a7bd1-74f2-4c14-a3c0-adf951393f10-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.478832 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.479989 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bc2ab0a-3831-417d-95cd-f5e392217120-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xqpl4\" (UID: \"5bc2ab0a-3831-417d-95cd-f5e392217120\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.480037 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f64jt"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.480122 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2-serving-cert\") pod \"openshift-config-operator-7777fb866f-stngg\" (UID: \"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.480644 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a87a7bd1-74f2-4c14-a3c0-adf951393f10-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.481103 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-szkgj"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.481342 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc396037-51ea-4671-bc9d-821a5505ace9-serving-cert\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.482237 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jh44p"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.482609 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/04116321-6817-48ae-9107-cd7bac2addf3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pxhm8\" (UID: \"04116321-6817-48ae-9107-cd7bac2addf3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.483204 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-f5476"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.484048 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f5476" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.484220 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f5476"] Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.496507 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.515531 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.541985 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.555389 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.576226 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.594919 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.620681 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.634919 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.654771 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.721125 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.735789 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.755537 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.776030 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.796666 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.816160 4803 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.836593 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.855597 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.875654 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.895113 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.915197 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.937165 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.955724 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.975648 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 21:49:46 crc kubenswrapper[4803]: I0127 21:49:46.996402 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.015734 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.036803 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.056323 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.075647 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.096187 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.116987 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.136567 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.156784 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.175559 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 21:49:47 crc 
kubenswrapper[4803]: I0127 21:49:47.197347 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.216140 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.236153 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.257109 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.276805 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.296632 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.317210 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.337480 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.356358 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.376649 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.394098 4803 request.go:700] Waited for 1.020131666s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.396516 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.416096 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.462821 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg4nq\" (UniqueName: \"kubernetes.io/projected/1cc14a3f-c523-4461-a3b6-ad41ce4392db-kube-api-access-vg4nq\") pod \"machine-approver-56656f9798-smwn2\" (UID: \"1cc14a3f-c523-4461-a3b6-ad41ce4392db\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.465829 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.476772 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.483701 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z649b\" (UniqueName: \"kubernetes.io/projected/02c1fd2d-3326-44dc-9353-1c19a701826c-kube-api-access-z649b\") pod \"apiserver-76f77b778f-7p5kq\" (UID: \"02c1fd2d-3326-44dc-9353-1c19a701826c\") " pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.494562 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.496728 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 21:49:47 crc kubenswrapper[4803]: W0127 21:49:47.499407 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cc14a3f_c523_4461_a3b6_ad41ce4392db.slice/crio-27e2f6cd568ef0459d8d655a45ffb1367d4f3592b5538b41432e006d1cc8df2d WatchSource:0}: Error finding container 27e2f6cd568ef0459d8d655a45ffb1367d4f3592b5538b41432e006d1cc8df2d: Status 404 returned error can't find the container with id 27e2f6cd568ef0459d8d655a45ffb1367d4f3592b5538b41432e006d1cc8df2d Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.515672 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.536725 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.555910 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.576110 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.596579 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.615210 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.636407 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.655902 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.677520 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.695244 4803 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.715788 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.744496 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.746355 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.756812 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.775708 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.797057 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.816783 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.835205 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.856383 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.876246 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.895447 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.915641 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.935634 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.955651 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.976219 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.984902 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7p5kq"] Jan 27 21:49:47 crc kubenswrapper[4803]: I0127 21:49:47.995607 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.015530 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.035656 4803 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.056465 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.066394 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" event={"ID":"1cc14a3f-c523-4461-a3b6-ad41ce4392db","Type":"ContainerStarted","Data":"137e027c865863c4d04b7c165df67fe39421e400cf7febae13988603399c72a5"} Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.066455 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" event={"ID":"1cc14a3f-c523-4461-a3b6-ad41ce4392db","Type":"ContainerStarted","Data":"87357ae6bbe0b5350dbdd7075c6c0b13005a8e0e139635c395bc2ac45bc49ed9"} Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.066474 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" event={"ID":"1cc14a3f-c523-4461-a3b6-ad41ce4392db","Type":"ContainerStarted","Data":"27e2f6cd568ef0459d8d655a45ffb1367d4f3592b5538b41432e006d1cc8df2d"} Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.068057 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" event={"ID":"02c1fd2d-3326-44dc-9353-1c19a701826c","Type":"ContainerStarted","Data":"ed16fffab0ff4457838911dcb07b280e7cd8e226cc94f65a80db5fd2f210d2c1"} Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.075608 4803 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.095911 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.116771 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.135889 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.155481 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.176999 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.197147 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.216415 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.235110 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.274569 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bc2ab0a-3831-417d-95cd-f5e392217120-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-xqpl4\" (UID: \"5bc2ab0a-3831-417d-95cd-f5e392217120\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.289338 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2kf9\" (UniqueName: \"kubernetes.io/projected/a87a7bd1-74f2-4c14-a3c0-adf951393f10-kube-api-access-g2kf9\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.309662 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hnpf\" (UniqueName: \"kubernetes.io/projected/04116321-6817-48ae-9107-cd7bac2addf3-kube-api-access-7hnpf\") pod \"openshift-apiserver-operator-796bbdcf4f-pxhm8\" (UID: \"04116321-6817-48ae-9107-cd7bac2addf3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.317839 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.330129 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a87a7bd1-74f2-4c14-a3c0-adf951393f10-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4nmr7\" (UID: \"a87a7bd1-74f2-4c14-a3c0-adf951393f10\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.349722 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl88p\" (UniqueName: \"kubernetes.io/projected/bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2-kube-api-access-kl88p\") pod \"openshift-config-operator-7777fb866f-stngg\" (UID: \"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.379498 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.388450 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x984s\" (UniqueName: \"kubernetes.io/projected/fc396037-51ea-4671-bc9d-821a5505ace9-kube-api-access-x984s\") pod \"route-controller-manager-6576b87f9c-lmjtq\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.394834 4803 request.go:700] Waited for 1.910506177s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.396395 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.416350 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 
21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.435871 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.483361 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4"] Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.499928 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-trusted-ca\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.499962 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e01cae6-c0f6-4f51-ba69-6a162470b81c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2vbkh\" (UID: \"8e01cae6-c0f6-4f51-ba69-6a162470b81c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.499983 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500000 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-etcd-service-ca\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500014 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500099 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500116 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a6eb50d-a8af-4e53-a129-aee15ae61037-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500135 4803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-trusted-ca-bundle\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500184 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhlmg\" (UniqueName: \"kubernetes.io/projected/e2308949-6865-4d3b-ad3b-1de5c42149b8-kube-api-access-fhlmg\") pod \"machine-api-operator-5694c8668f-8lpmj\" (UID: \"e2308949-6865-4d3b-ad3b-1de5c42149b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500211 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-dir\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500226 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500241 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2308949-6865-4d3b-ad3b-1de5c42149b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8lpmj\" (UID: \"e2308949-6865-4d3b-ad3b-1de5c42149b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500271 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbfsf\" (UniqueName: \"kubernetes.io/projected/6e8228ba-8397-4400-b30f-07dcf24d6fb5-kube-api-access-kbfsf\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500287 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-etcd-client\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500304 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp5td\" (UniqueName: \"kubernetes.io/projected/8f8b8ad1-f276-4546-afd2-49f338f38c92-kube-api-access-lp5td\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500320 4803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f8b8ad1-f276-4546-afd2-49f338f38c92-serving-cert\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500344 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f8b8ad1-f276-4546-afd2-49f338f38c92-service-ca-bundle\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500369 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-tls\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500387 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500405 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500422 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e8228ba-8397-4400-b30f-07dcf24d6fb5-serving-cert\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500439 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-serving-cert\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500465 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a6eb50d-a8af-4e53-a129-aee15ae61037-etcd-client\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500482 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-925bm\" (UniqueName: \"kubernetes.io/projected/7a6eb50d-a8af-4e53-a129-aee15ae61037-kube-api-access-925bm\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500500 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500516 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbsg5\" (UniqueName: \"kubernetes.io/projected/70091f5f-e06c-4cf3-8bc8-299f10207363-kube-api-access-kbsg5\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500530 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz8bz\" (UniqueName: \"kubernetes.io/projected/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-kube-api-access-qz8bz\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500545 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e62d282c-a35b-42d6-a490-e11c0239b6c3-trusted-ca\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500559 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbzfx\" (UniqueName: \"kubernetes.io/projected/e62d282c-a35b-42d6-a490-e11c0239b6c3-kube-api-access-hbzfx\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500573 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a6eb50d-a8af-4e53-a129-aee15ae61037-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500588 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500604 4803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500619 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f8b8ad1-f276-4546-afd2-49f338f38c92-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500635 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500650 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmvfj\" (UniqueName: \"kubernetes.io/projected/1bc7c7ba-cad8-4f64-836e-a564b254e1fd-kube-api-access-xmvfj\") pod \"downloads-7954f5f757-9drvm\" (UID: \"1bc7c7ba-cad8-4f64-836e-a564b254e1fd\") " pod="openshift-console/downloads-7954f5f757-9drvm" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500664 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-service-ca\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500685 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500700 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a6eb50d-a8af-4e53-a129-aee15ae61037-audit-policies\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500717 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8b8ad1-f276-4546-afd2-49f338f38c92-config\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500733 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm9zj\" (UniqueName: \"kubernetes.io/projected/8e01cae6-c0f6-4f51-ba69-6a162470b81c-kube-api-access-mm9zj\") pod \"cluster-samples-operator-665b6dd947-2vbkh\" (UID: \"8e01cae6-c0f6-4f51-ba69-6a162470b81c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500750 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2308949-6865-4d3b-ad3b-1de5c42149b8-images\") pod \"machine-api-operator-5694c8668f-8lpmj\" (UID: \"e2308949-6865-4d3b-ad3b-1de5c42149b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500777 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg8wz\" (UniqueName: \"kubernetes.io/projected/b06a9990-b5a6-4198-b3da-22eb6df6692b-kube-api-access-wg8wz\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500792 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a6eb50d-a8af-4e53-a129-aee15ae61037-audit-dir\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500807 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500823 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500837 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-config\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500885 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-bound-sa-token\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500903 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/61adce3e-cfdd-4a33-b64d-f49069ef6469-trusted-ca\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500919 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-config\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500933 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-policies\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500948 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-certificates\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500963 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500979 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.500993 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2308949-6865-4d3b-ad3b-1de5c42149b8-config\") pod \"machine-api-operator-5694c8668f-8lpmj\" (UID: \"e2308949-6865-4d3b-ad3b-1de5c42149b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501028 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e62d282c-a35b-42d6-a490-e11c0239b6c3-metrics-tls\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501046 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-oauth-config\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501061 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-oauth-serving-cert\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501076 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkg89\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-kube-api-access-hkg89\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501091 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-client-ca\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501105 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e62d282c-a35b-42d6-a490-e11c0239b6c3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501119 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-config\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501134 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61adce3e-cfdd-4a33-b64d-f49069ef6469-config\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501148 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-serving-cert\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501162 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-etcd-ca\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501177 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a6eb50d-a8af-4e53-a129-aee15ae61037-encryption-config\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501192 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61adce3e-cfdd-4a33-b64d-f49069ef6469-serving-cert\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501207 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p5xw\" (UniqueName: \"kubernetes.io/projected/61adce3e-cfdd-4a33-b64d-f49069ef6469-kube-api-access-5p5xw\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.501222 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6eb50d-a8af-4e53-a129-aee15ae61037-serving-cert\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: E0127 21:49:48.501563 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:49.001553325 +0000 UTC m=+141.417575024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.507701 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.523192 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.570210 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.603934 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.604302 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-tls\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.604335 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.604362 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0612fd3-e6b4-43b1-8e66-d0bf17281248-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khmz4\" (UID: \"f0612fd3-e6b4-43b1-8e66-d0bf17281248\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.604380 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31c328be-cd7e-48a1-bb8d-086bbe5f1dd6-apiservice-cert\") pod \"packageserver-d55dfcdfc-dfdfn\" (UID: \"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.605394 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/827ee45d-1ade-46af-95fe-ab0e673f6dc1-config-volume\") pod \"dns-default-npwr7\" (UID: \"827ee45d-1ade-46af-95fe-ab0e673f6dc1\") " pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.605494 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvvb\" (UniqueName: \"kubernetes.io/projected/056beb8e-ab30-48dc-b00e-6c261269431f-kube-api-access-8xvvb\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.605588 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a6eb50d-a8af-4e53-a129-aee15ae61037-etcd-client\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.605622 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl5cr\" (UniqueName: \"kubernetes.io/projected/511157a0-ff3f-4105-b425-81fe57ec64e0-kube-api-access-pl5cr\") pod \"migrator-59844c95c7-w264r\" (UID: \"511157a0-ff3f-4105-b425-81fe57ec64e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.605648 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0612fd3-e6b4-43b1-8e66-d0bf17281248-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khmz4\" (UID: \"f0612fd3-e6b4-43b1-8e66-d0bf17281248\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.605682 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3887e56-f659-4a2f-ac29-e6841a2245da-config\") pod \"service-ca-operator-777779d784-f64jt\" (UID: \"c3887e56-f659-4a2f-ac29-e6841a2245da\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.605815 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-plugins-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.606473 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbsg5\" (UniqueName: \"kubernetes.io/projected/70091f5f-e06c-4cf3-8bc8-299f10207363-kube-api-access-kbsg5\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.606608 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz8bz\" (UniqueName: \"kubernetes.io/projected/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-kube-api-access-qz8bz\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.606667 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jlcm\" (UniqueName: \"kubernetes.io/projected/8f440e60-e9e3-43ef-93ca-9b27adeac069-kube-api-access-4jlcm\") pod \"control-plane-machine-set-operator-78cbb6b69f-wpzf9\" (UID: \"8f440e60-e9e3-43ef-93ca-9b27adeac069\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.606719 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n7mdf\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:48 crc kubenswrapper[4803]: E0127 21:49:48.606784 4803 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:49.106762704 +0000 UTC m=+141.522784403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.606905 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbzfx\" (UniqueName: \"kubernetes.io/projected/e62d282c-a35b-42d6-a490-e11c0239b6c3-kube-api-access-hbzfx\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.607056 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a6eb50d-a8af-4e53-a129-aee15ae61037-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.607205 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.607259 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f8b8ad1-f276-4546-afd2-49f338f38c92-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.607302 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3643de61-fe1e-4b5f-acef-ac477aa81f8a-cert\") pod \"ingress-canary-f5476\" (UID: \"3643de61-fe1e-4b5f-acef-ac477aa81f8a\") " pod="openshift-ingress-canary/ingress-canary-f5476" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.607332 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d7bade4-c73a-419d-9c33-c30b0b7260ca-proxy-tls\") pod \"machine-config-controller-84d6567774-tlnvs\" (UID: \"8d7bade4-c73a-419d-9c33-c30b0b7260ca\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.607604 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7a6eb50d-a8af-4e53-a129-aee15ae61037-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.607691 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d7bade4-c73a-419d-9c33-c30b0b7260ca-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tlnvs\" (UID: \"8d7bade4-c73a-419d-9c33-c30b0b7260ca\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.607893 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n7mdf\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.607951 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmvfj\" (UniqueName: \"kubernetes.io/projected/1bc7c7ba-cad8-4f64-836e-a564b254e1fd-kube-api-access-xmvfj\") pod \"downloads-7954f5f757-9drvm\" (UID: \"1bc7c7ba-cad8-4f64-836e-a564b254e1fd\") " pod="openshift-console/downloads-7954f5f757-9drvm" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.607994 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae7575d2-5f8d-44a1-90fb-653fe276f273-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n24nl\" (UID: \"ae7575d2-5f8d-44a1-90fb-653fe276f273\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608022 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/056beb8e-ab30-48dc-b00e-6c261269431f-default-certificate\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608044 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/827ee45d-1ade-46af-95fe-ab0e673f6dc1-metrics-tls\") pod \"dns-default-npwr7\" (UID: \"827ee45d-1ade-46af-95fe-ab0e673f6dc1\") " pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608066 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9t9m\" (UniqueName: \"kubernetes.io/projected/841660c5-b152-467b-97d4-38b9a181d315-kube-api-access-c9t9m\") pod \"service-ca-9c57cc56f-bgfw4\" (UID: \"841660c5-b152-467b-97d4-38b9a181d315\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608130 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg8wz\" (UniqueName: 
\"kubernetes.io/projected/b06a9990-b5a6-4198-b3da-22eb6df6692b-kube-api-access-wg8wz\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608154 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608175 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1a768c9-8a8e-412a-a377-6812b5aca206-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-drp7p\" (UID: \"d1a768c9-8a8e-412a-a377-6812b5aca206\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608227 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-bound-sa-token\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608273 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-registration-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608306 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-config\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608336 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-policies\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608374 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-certificates\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608393 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608417 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k7cb\" (UniqueName: \"kubernetes.io/projected/a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7-kube-api-access-6k7cb\") pod \"dns-operator-744455d44c-qrccx\" (UID: \"a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7\") " pod="openshift-dns-operator/dns-operator-744455d44c-qrccx" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608451 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkg89\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-kube-api-access-hkg89\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608479 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/767d334b-3f70-4847-b45a-ccf0d7e2dc2b-srv-cert\") pod \"catalog-operator-68c6474976-hmpmk\" (UID: \"767d334b-3f70-4847-b45a-ccf0d7e2dc2b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608539 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p5xw\" (UniqueName: \"kubernetes.io/projected/61adce3e-cfdd-4a33-b64d-f49069ef6469-kube-api-access-5p5xw\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608701 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f8b8ad1-f276-4546-afd2-49f338f38c92-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608820 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6eb50d-a8af-4e53-a129-aee15ae61037-serving-cert\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608874 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-trusted-ca\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608910 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e01cae6-c0f6-4f51-ba69-6a162470b81c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2vbkh\" (UID: \"8e01cae6-c0f6-4f51-ba69-6a162470b81c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608939 4803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5380cb77-bf7a-4cc1-b12b-7159748430eb-images\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.608976 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.609182 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.609586 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-tls\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.609675 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-etcd-service-ca\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.609710 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.609831 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.609891 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a6eb50d-a8af-4e53-a129-aee15ae61037-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.609923 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d65kn\" (UID: \"ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.609965 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5380cb77-bf7a-4cc1-b12b-7159748430eb-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.610062 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-trusted-ca-bundle\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.610094 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhlmg\" (UniqueName: \"kubernetes.io/projected/e2308949-6865-4d3b-ad3b-1de5c42149b8-kube-api-access-fhlmg\") pod \"machine-api-operator-5694c8668f-8lpmj\" (UID: \"e2308949-6865-4d3b-ad3b-1de5c42149b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.610426 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-certificates\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.610544 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-trusted-ca\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.610693 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.610758 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b1a88b1-f5d6-4946-8dda-3defb18a63fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dqrt7\" (UID: \"1b1a88b1-f5d6-4946-8dda-3defb18a63fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.610783 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/056beb8e-ab30-48dc-b00e-6c261269431f-stats-auth\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.610804 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-secret-volume\") pod \"collect-profiles-29492505-22jdn\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.611141 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-config\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.611087 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-policies\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.611163 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.611339 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp5td\" (UniqueName: \"kubernetes.io/projected/8f8b8ad1-f276-4546-afd2-49f338f38c92-kube-api-access-lp5td\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.611365 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.611411 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f8b8ad1-f276-4546-afd2-49f338f38c92-serving-cert\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.611778 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f8b8ad1-f276-4546-afd2-49f338f38c92-service-ca-bundle\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.611787 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-etcd-service-ca\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.611936 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a6eb50d-a8af-4e53-a129-aee15ae61037-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.611967 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm2vh\" (UniqueName: \"kubernetes.io/projected/04162a6e-b772-45d4-9ec4-894e70fc95a2-kube-api-access-bm2vh\") pod \"machine-config-server-xhhs6\" (UID: \"04162a6e-b772-45d4-9ec4-894e70fc95a2\") " pod="openshift-machine-config-operator/machine-config-server-xhhs6" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612017 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvqjh\" (UniqueName: \"kubernetes.io/projected/827ee45d-1ade-46af-95fe-ab0e673f6dc1-kube-api-access-jvqjh\") pod \"dns-default-npwr7\" (UID: \"827ee45d-1ade-46af-95fe-ab0e673f6dc1\") " pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612040 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/066a7a5b-c610-4b2e-a2f6-2c90b997fbc9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-szkgj\" (UID: \"066a7a5b-c610-4b2e-a2f6-2c90b997fbc9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612078 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5380cb77-bf7a-4cc1-b12b-7159748430eb-proxy-tls\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612114 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612154 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/04162a6e-b772-45d4-9ec4-894e70fc95a2-node-bootstrap-token\") pod \"machine-config-server-xhhs6\" (UID: \"04162a6e-b772-45d4-9ec4-894e70fc95a2\") " pod="openshift-machine-config-operator/machine-config-server-xhhs6" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612193 4803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e8228ba-8397-4400-b30f-07dcf24d6fb5-serving-cert\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612228 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-serving-cert\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612274 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dhb\" (UniqueName: \"kubernetes.io/projected/8d7bade4-c73a-419d-9c33-c30b0b7260ca-kube-api-access-v2dhb\") pod \"machine-config-controller-84d6567774-tlnvs\" (UID: \"8d7bade4-c73a-419d-9c33-c30b0b7260ca\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612406 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-925bm\" (UniqueName: \"kubernetes.io/projected/7a6eb50d-a8af-4e53-a129-aee15ae61037-kube-api-access-925bm\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612427 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-trusted-ca-bundle\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612438 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.611987 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612645 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7575d2-5f8d-44a1-90fb-653fe276f273-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n24nl\" (UID: \"ae7575d2-5f8d-44a1-90fb-653fe276f273\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612670 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-w7g5z\" (UniqueName: \"kubernetes.io/projected/066a7a5b-c610-4b2e-a2f6-2c90b997fbc9-kube-api-access-w7g5z\") pod \"multus-admission-controller-857f4d67dd-szkgj\" (UID: \"066a7a5b-c610-4b2e-a2f6-2c90b997fbc9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612723 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/841660c5-b152-467b-97d4-38b9a181d315-signing-key\") pod \"service-ca-9c57cc56f-bgfw4\" (UID: \"841660c5-b152-467b-97d4-38b9a181d315\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.612761 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e62d282c-a35b-42d6-a490-e11c0239b6c3-trusted-ca\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.613050 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/056beb8e-ab30-48dc-b00e-6c261269431f-metrics-certs\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.613083 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1a88b1-f5d6-4946-8dda-3defb18a63fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dqrt7\" (UID: \"1b1a88b1-f5d6-4946-8dda-3defb18a63fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.613129 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f440e60-e9e3-43ef-93ca-9b27adeac069-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wpzf9\" (UID: \"8f440e60-e9e3-43ef-93ca-9b27adeac069\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.613379 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.613406 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3887e56-f659-4a2f-ac29-e6841a2245da-serving-cert\") pod \"service-ca-operator-777779d784-f64jt\" (UID: \"c3887e56-f659-4a2f-ac29-e6841a2245da\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.613535 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/841660c5-b152-467b-97d4-38b9a181d315-signing-cabundle\") pod \"service-ca-9c57cc56f-bgfw4\" (UID: \"841660c5-b152-467b-97d4-38b9a181d315\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.613593 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f8b8ad1-f276-4546-afd2-49f338f38c92-service-ca-bundle\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.613789 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vtjk\" (UniqueName: \"kubernetes.io/projected/d2eb7aad-8e72-489c-a000-ef21c4d9589a-kube-api-access-9vtjk\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.613943 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/767d334b-3f70-4847-b45a-ccf0d7e2dc2b-profile-collector-cert\") pod \"catalog-operator-68c6474976-hmpmk\" (UID: \"767d334b-3f70-4847-b45a-ccf0d7e2dc2b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.614217 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6eb50d-a8af-4e53-a129-aee15ae61037-serving-cert\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.614696 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.614716 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.614799 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-service-ca\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.614887 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e01cae6-c0f6-4f51-ba69-6a162470b81c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2vbkh\" (UID: 
\"8e01cae6-c0f6-4f51-ba69-6a162470b81c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.615136 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.615336 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.615468 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a6eb50d-a8af-4e53-a129-aee15ae61037-audit-policies\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.615627 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e62d282c-a35b-42d6-a490-e11c0239b6c3-trusted-ca\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.615665 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8b8ad1-f276-4546-afd2-49f338f38c92-config\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.615978 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.616014 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkhrm\" (UniqueName: \"kubernetes.io/projected/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-kube-api-access-wkhrm\") pod \"marketplace-operator-79b997595-n7mdf\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.616237 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a6eb50d-a8af-4e53-a129-aee15ae61037-audit-policies\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc 
kubenswrapper[4803]: I0127 21:49:48.616428 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f8b8ad1-f276-4546-afd2-49f338f38c92-serving-cert\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.616557 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8b8ad1-f276-4546-afd2-49f338f38c92-config\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.617094 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-serving-cert\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.617107 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-service-ca\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.617360 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: E0127 21:49:48.617840 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:49.116102283 +0000 UTC m=+141.532123982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
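
The MountVolume.MountDevice failure above is the only error-level (E-prefixed) entry in this excerpt: the kubelet cannot find kubevirt.io.hostpath-provisioner among its registered CSI drivers, so the image-registry PVC mount is parked behind a 500ms retry backoff (failure at 21:49:48.617, no retries until 21:49:49.116). During node startup this is usually transient, since the csi-hostpathplugin-jh44p pod that registers the driver is itself still being set up in this same window (its csi-data-dir, socket-dir, mountpoint-dir and registration-dir host-path volumes are mounted in the surrounding entries). A minimal sketch for pulling these backoff events out of a saved journal, assuming one journal entry per line and a hypothetical local file named kubelet.log:

    import re

    # Matches kubelet nestedpendingoperations backoff entries of the form:
    #   Operation for "{volumeName:... podName:... nodeName:...}" failed.
    #   No retries permitted until <date> <time> ... (durationBeforeRetry ...).
    BACKOFF_RE = re.compile(
        r'Operation for "\{volumeName:(?P<volume>\S+) '
        r'podName:(?P<pod>\S*) nodeName:(?P<node>\S*)\}" failed\. '
        r'No retries permitted until (?P<until>\S+ \S+)'
    )

    def find_backoffs(path):
        """Yield (volume, retry-after timestamp) for each mount backoff in the log."""
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = BACKOFF_RE.search(line)
                if m:
                    yield m.group("volume"), m.group("until")

    if __name__ == "__main__":
        for volume, until in find_backoffs("kubelet.log"):  # hypothetical file name
            print(volume, "-> no retries until", until)

Run over this excerpt it reports the hostpath-provisioner PVC once, gated until 21:49:49.116, i.e. the 500ms backoff from the failed attempt.
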
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.618342 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-config\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.618369 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0612fd3-e6b4-43b1-8e66-d0bf17281248-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khmz4\" (UID: \"f0612fd3-e6b4-43b1-8e66-d0bf17281248\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.618395 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-csi-data-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.618419 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61adce3e-cfdd-4a33-b64d-f49069ef6469-trusted-ca\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.618439 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd5t6\" (UniqueName: \"kubernetes.io/projected/3643de61-fe1e-4b5f-acef-ac477aa81f8a-kube-api-access-hd5t6\") pod \"ingress-canary-f5476\" (UID: \"3643de61-fe1e-4b5f-acef-ac477aa81f8a\") " pod="openshift-ingress-canary/ingress-canary-f5476" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.618460 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/056beb8e-ab30-48dc-b00e-6c261269431f-service-ca-bundle\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.618484 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.618817 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e2308949-6865-4d3b-ad3b-1de5c42149b8-config\") pod \"machine-api-operator-5694c8668f-8lpmj\" (UID: \"e2308949-6865-4d3b-ad3b-1de5c42149b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.619231 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a6eb50d-a8af-4e53-a129-aee15ae61037-etcd-client\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.619592 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2308949-6865-4d3b-ad3b-1de5c42149b8-config\") pod \"machine-api-operator-5694c8668f-8lpmj\" (UID: \"e2308949-6865-4d3b-ad3b-1de5c42149b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.619789 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-config\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.619964 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31c328be-cd7e-48a1-bb8d-086bbe5f1dd6-webhook-cert\") pod \"packageserver-d55dfcdfc-dfdfn\" (UID: \"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.619998 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfg9s\" (UniqueName: \"kubernetes.io/projected/767d334b-3f70-4847-b45a-ccf0d7e2dc2b-kube-api-access-kfg9s\") pod \"catalog-operator-68c6474976-hmpmk\" (UID: \"767d334b-3f70-4847-b45a-ccf0d7e2dc2b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.620057 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e62d282c-a35b-42d6-a490-e11c0239b6c3-metrics-tls\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.620343 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-oauth-config\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.620424 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-oauth-serving-cert\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.620454 4803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-socket-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.620491 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a6eb50d-a8af-4e53-a129-aee15ae61037-audit-dir\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.620970 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2308949-6865-4d3b-ad3b-1de5c42149b8-images\") pod \"machine-api-operator-5694c8668f-8lpmj\" (UID: \"e2308949-6865-4d3b-ad3b-1de5c42149b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.621054 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.621070 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-client-ca\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.621104 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e62d282c-a35b-42d6-a490-e11c0239b6c3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.621176 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-config\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.621205 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61adce3e-cfdd-4a33-b64d-f49069ef6469-config\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.621324 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-serving-cert\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.621348 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-etcd-ca\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.621792 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-config\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.621831 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a6eb50d-a8af-4e53-a129-aee15ae61037-encryption-config\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.621908 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61adce3e-cfdd-4a33-b64d-f49069ef6469-serving-cert\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.622000 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-client-ca\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.622061 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/04162a6e-b772-45d4-9ec4-894e70fc95a2-certs\") pod \"machine-config-server-xhhs6\" (UID: \"04162a6e-b772-45d4-9ec4-894e70fc95a2\") " pod="openshift-machine-config-operator/machine-config-server-xhhs6" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.622119 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25eb3de0-78b3-4e89-a860-9f1778060c50-srv-cert\") pod \"olm-operator-6b444d44fb-qcx9g\" (UID: \"25eb3de0-78b3-4e89-a860-9f1778060c50\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.622166 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-config-volume\") pod \"collect-profiles-29492505-22jdn\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.622119 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-etcd-ca\") pod 
\"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.622395 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1a768c9-8a8e-412a-a377-6812b5aca206-config\") pod \"kube-apiserver-operator-766d6c64bb-drp7p\" (UID: \"d1a768c9-8a8e-412a-a377-6812b5aca206\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.622406 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-oauth-serving-cert\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.622490 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrrx2\" (UniqueName: \"kubernetes.io/projected/ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9-kube-api-access-qrrx2\") pod \"package-server-manager-789f6589d5-d65kn\" (UID: \"ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.622575 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98p87\" (UniqueName: \"kubernetes.io/projected/1b1a88b1-f5d6-4946-8dda-3defb18a63fd-kube-api-access-98p87\") pod \"openshift-controller-manager-operator-756b6f6bc6-dqrt7\" (UID: \"1b1a88b1-f5d6-4946-8dda-3defb18a63fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.622633 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-mountpoint-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.622787 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61adce3e-cfdd-4a33-b64d-f49069ef6469-config\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623082 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25eb3de0-78b3-4e89-a860-9f1778060c50-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qcx9g\" (UID: \"25eb3de0-78b3-4e89-a860-9f1778060c50\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623151 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-dir\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623176 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2308949-6865-4d3b-ad3b-1de5c42149b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8lpmj\" (UID: \"e2308949-6865-4d3b-ad3b-1de5c42149b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623200 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw9xp\" (UniqueName: \"kubernetes.io/projected/c3887e56-f659-4a2f-ac29-e6841a2245da-kube-api-access-tw9xp\") pod \"service-ca-operator-777779d784-f64jt\" (UID: \"c3887e56-f659-4a2f-ac29-e6841a2245da\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623290 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbfsf\" (UniqueName: \"kubernetes.io/projected/6e8228ba-8397-4400-b30f-07dcf24d6fb5-kube-api-access-kbfsf\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623314 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-etcd-client\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623315 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-dir\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623338 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61adce3e-cfdd-4a33-b64d-f49069ef6469-trusted-ca\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623387 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1a768c9-8a8e-412a-a377-6812b5aca206-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-drp7p\" (UID: \"d1a768c9-8a8e-412a-a377-6812b5aca206\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623500 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8c7f\" (UniqueName: \"kubernetes.io/projected/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-kube-api-access-p8c7f\") pod \"collect-profiles-29492505-22jdn\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623629 4803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bznb\" (UniqueName: \"kubernetes.io/projected/ae7575d2-5f8d-44a1-90fb-653fe276f273-kube-api-access-9bznb\") pod \"kube-storage-version-migrator-operator-b67b599dd-n24nl\" (UID: \"ae7575d2-5f8d-44a1-90fb-653fe276f273\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623893 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w76mm\" (UniqueName: \"kubernetes.io/projected/5380cb77-bf7a-4cc1-b12b-7159748430eb-kube-api-access-w76mm\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.623930 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.625298 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.625461 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.627436 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-serving-cert\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.627571 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e8228ba-8397-4400-b30f-07dcf24d6fb5-serving-cert\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.627871 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-oauth-config\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.628564 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e62d282c-a35b-42d6-a490-e11c0239b6c3-metrics-tls\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.629089 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2308949-6865-4d3b-ad3b-1de5c42149b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8lpmj\" (UID: \"e2308949-6865-4d3b-ad3b-1de5c42149b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.629567 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-etcd-client\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.629787 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.630043 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a6eb50d-a8af-4e53-a129-aee15ae61037-encryption-config\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.630067 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.637653 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61adce3e-cfdd-4a33-b64d-f49069ef6469-serving-cert\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.644381 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz8bz\" (UniqueName: \"kubernetes.io/projected/f15cefaf-aacf-45a8-a2d5-8874dcf893b1-kube-api-access-qz8bz\") pod \"etcd-operator-b45778765-pslb5\" (UID: \"f15cefaf-aacf-45a8-a2d5-8874dcf893b1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.652259 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbsg5\" (UniqueName: \"kubernetes.io/projected/70091f5f-e06c-4cf3-8bc8-299f10207363-kube-api-access-kbsg5\") pod \"oauth-openshift-558db77b4-7x4wr\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.674745 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbzfx\" (UniqueName: \"kubernetes.io/projected/e62d282c-a35b-42d6-a490-e11c0239b6c3-kube-api-access-hbzfx\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.686574 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.697704 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8"] Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.708627 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg8wz\" (UniqueName: \"kubernetes.io/projected/b06a9990-b5a6-4198-b3da-22eb6df6692b-kube-api-access-wg8wz\") pod \"console-f9d7485db-s9tzw\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") " pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: W0127 21:49:48.718786 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04116321_6817_48ae_9107_cd7bac2addf3.slice/crio-41c1b2d1fa262a6a6927bd68c6af9bbcce8560b411a23a32fc00325aeda2a4a6 WatchSource:0}: Error finding container 41c1b2d1fa262a6a6927bd68c6af9bbcce8560b411a23a32fc00325aeda2a4a6: Status 404 returned error can't find the container with id 41c1b2d1fa262a6a6927bd68c6af9bbcce8560b411a23a32fc00325aeda2a4a6 Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724545 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724693 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9t9m\" (UniqueName: \"kubernetes.io/projected/841660c5-b152-467b-97d4-38b9a181d315-kube-api-access-c9t9m\") pod \"service-ca-9c57cc56f-bgfw4\" (UID: \"841660c5-b152-467b-97d4-38b9a181d315\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724727 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1a768c9-8a8e-412a-a377-6812b5aca206-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-drp7p\" (UID: \"d1a768c9-8a8e-412a-a377-6812b5aca206\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724757 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-registration-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724775 4803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k7cb\" (UniqueName: \"kubernetes.io/projected/a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7-kube-api-access-6k7cb\") pod \"dns-operator-744455d44c-qrccx\" (UID: \"a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7\") " pod="openshift-dns-operator/dns-operator-744455d44c-qrccx" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724796 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/767d334b-3f70-4847-b45a-ccf0d7e2dc2b-srv-cert\") pod \"catalog-operator-68c6474976-hmpmk\" (UID: \"767d334b-3f70-4847-b45a-ccf0d7e2dc2b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724820 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5380cb77-bf7a-4cc1-b12b-7159748430eb-images\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724837 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d65kn\" (UID: \"ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724870 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5380cb77-bf7a-4cc1-b12b-7159748430eb-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724898 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b1a88b1-f5d6-4946-8dda-3defb18a63fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dqrt7\" (UID: \"1b1a88b1-f5d6-4946-8dda-3defb18a63fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724915 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/056beb8e-ab30-48dc-b00e-6c261269431f-stats-auth\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724928 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-secret-volume\") pod \"collect-profiles-29492505-22jdn\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724969 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm2vh\" 
(UniqueName: \"kubernetes.io/projected/04162a6e-b772-45d4-9ec4-894e70fc95a2-kube-api-access-bm2vh\") pod \"machine-config-server-xhhs6\" (UID: \"04162a6e-b772-45d4-9ec4-894e70fc95a2\") " pod="openshift-machine-config-operator/machine-config-server-xhhs6" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.724988 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvqjh\" (UniqueName: \"kubernetes.io/projected/827ee45d-1ade-46af-95fe-ab0e673f6dc1-kube-api-access-jvqjh\") pod \"dns-default-npwr7\" (UID: \"827ee45d-1ade-46af-95fe-ab0e673f6dc1\") " pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725007 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/066a7a5b-c610-4b2e-a2f6-2c90b997fbc9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-szkgj\" (UID: \"066a7a5b-c610-4b2e-a2f6-2c90b997fbc9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725026 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5380cb77-bf7a-4cc1-b12b-7159748430eb-proxy-tls\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725046 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/04162a6e-b772-45d4-9ec4-894e70fc95a2-node-bootstrap-token\") pod \"machine-config-server-xhhs6\" (UID: \"04162a6e-b772-45d4-9ec4-894e70fc95a2\") " pod="openshift-machine-config-operator/machine-config-server-xhhs6" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725074 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2dhb\" (UniqueName: \"kubernetes.io/projected/8d7bade4-c73a-419d-9c33-c30b0b7260ca-kube-api-access-v2dhb\") pod \"machine-config-controller-84d6567774-tlnvs\" (UID: \"8d7bade4-c73a-419d-9c33-c30b0b7260ca\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725102 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7575d2-5f8d-44a1-90fb-653fe276f273-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n24nl\" (UID: \"ae7575d2-5f8d-44a1-90fb-653fe276f273\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725118 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7g5z\" (UniqueName: \"kubernetes.io/projected/066a7a5b-c610-4b2e-a2f6-2c90b997fbc9-kube-api-access-w7g5z\") pod \"multus-admission-controller-857f4d67dd-szkgj\" (UID: \"066a7a5b-c610-4b2e-a2f6-2c90b997fbc9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725135 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/841660c5-b152-467b-97d4-38b9a181d315-signing-key\") pod \"service-ca-9c57cc56f-bgfw4\" 
(UID: \"841660c5-b152-467b-97d4-38b9a181d315\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725149 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/056beb8e-ab30-48dc-b00e-6c261269431f-metrics-certs\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725164 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1a88b1-f5d6-4946-8dda-3defb18a63fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dqrt7\" (UID: \"1b1a88b1-f5d6-4946-8dda-3defb18a63fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725181 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f440e60-e9e3-43ef-93ca-9b27adeac069-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wpzf9\" (UID: \"8f440e60-e9e3-43ef-93ca-9b27adeac069\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725199 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3887e56-f659-4a2f-ac29-e6841a2245da-serving-cert\") pod \"service-ca-operator-777779d784-f64jt\" (UID: \"c3887e56-f659-4a2f-ac29-e6841a2245da\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725215 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/841660c5-b152-467b-97d4-38b9a181d315-signing-cabundle\") pod \"service-ca-9c57cc56f-bgfw4\" (UID: \"841660c5-b152-467b-97d4-38b9a181d315\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725232 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vtjk\" (UniqueName: \"kubernetes.io/projected/d2eb7aad-8e72-489c-a000-ef21c4d9589a-kube-api-access-9vtjk\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725255 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/767d334b-3f70-4847-b45a-ccf0d7e2dc2b-profile-collector-cert\") pod \"catalog-operator-68c6474976-hmpmk\" (UID: \"767d334b-3f70-4847-b45a-ccf0d7e2dc2b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725292 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkhrm\" (UniqueName: \"kubernetes.io/projected/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-kube-api-access-wkhrm\") pod \"marketplace-operator-79b997595-n7mdf\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 
21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725313 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzthm\" (UniqueName: \"kubernetes.io/projected/25eb3de0-78b3-4e89-a860-9f1778060c50-kube-api-access-fzthm\") pod \"olm-operator-6b444d44fb-qcx9g\" (UID: \"25eb3de0-78b3-4e89-a860-9f1778060c50\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725332 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7-metrics-tls\") pod \"dns-operator-744455d44c-qrccx\" (UID: \"a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7\") " pod="openshift-dns-operator/dns-operator-744455d44c-qrccx" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725349 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/31c328be-cd7e-48a1-bb8d-086bbe5f1dd6-tmpfs\") pod \"packageserver-d55dfcdfc-dfdfn\" (UID: \"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725365 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6854s\" (UniqueName: \"kubernetes.io/projected/31c328be-cd7e-48a1-bb8d-086bbe5f1dd6-kube-api-access-6854s\") pod \"packageserver-d55dfcdfc-dfdfn\" (UID: \"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725385 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0612fd3-e6b4-43b1-8e66-d0bf17281248-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khmz4\" (UID: \"f0612fd3-e6b4-43b1-8e66-d0bf17281248\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725400 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-csi-data-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725422 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd5t6\" (UniqueName: \"kubernetes.io/projected/3643de61-fe1e-4b5f-acef-ac477aa81f8a-kube-api-access-hd5t6\") pod \"ingress-canary-f5476\" (UID: \"3643de61-fe1e-4b5f-acef-ac477aa81f8a\") " pod="openshift-ingress-canary/ingress-canary-f5476" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725437 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/056beb8e-ab30-48dc-b00e-6c261269431f-service-ca-bundle\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725452 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31c328be-cd7e-48a1-bb8d-086bbe5f1dd6-webhook-cert\") pod 
\"packageserver-d55dfcdfc-dfdfn\" (UID: \"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725473 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfg9s\" (UniqueName: \"kubernetes.io/projected/767d334b-3f70-4847-b45a-ccf0d7e2dc2b-kube-api-access-kfg9s\") pod \"catalog-operator-68c6474976-hmpmk\" (UID: \"767d334b-3f70-4847-b45a-ccf0d7e2dc2b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725490 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-socket-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725515 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/04162a6e-b772-45d4-9ec4-894e70fc95a2-certs\") pod \"machine-config-server-xhhs6\" (UID: \"04162a6e-b772-45d4-9ec4-894e70fc95a2\") " pod="openshift-machine-config-operator/machine-config-server-xhhs6" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725533 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25eb3de0-78b3-4e89-a860-9f1778060c50-srv-cert\") pod \"olm-operator-6b444d44fb-qcx9g\" (UID: \"25eb3de0-78b3-4e89-a860-9f1778060c50\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725549 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-config-volume\") pod \"collect-profiles-29492505-22jdn\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725565 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1a768c9-8a8e-412a-a377-6812b5aca206-config\") pod \"kube-apiserver-operator-766d6c64bb-drp7p\" (UID: \"d1a768c9-8a8e-412a-a377-6812b5aca206\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725581 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrrx2\" (UniqueName: \"kubernetes.io/projected/ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9-kube-api-access-qrrx2\") pod \"package-server-manager-789f6589d5-d65kn\" (UID: \"ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725598 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98p87\" (UniqueName: \"kubernetes.io/projected/1b1a88b1-f5d6-4946-8dda-3defb18a63fd-kube-api-access-98p87\") pod \"openshift-controller-manager-operator-756b6f6bc6-dqrt7\" (UID: \"1b1a88b1-f5d6-4946-8dda-3defb18a63fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" Jan 
27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725616 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-mountpoint-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725630 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25eb3de0-78b3-4e89-a860-9f1778060c50-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qcx9g\" (UID: \"25eb3de0-78b3-4e89-a860-9f1778060c50\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725647 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw9xp\" (UniqueName: \"kubernetes.io/projected/c3887e56-f659-4a2f-ac29-e6841a2245da-kube-api-access-tw9xp\") pod \"service-ca-operator-777779d784-f64jt\" (UID: \"c3887e56-f659-4a2f-ac29-e6841a2245da\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725667 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1a768c9-8a8e-412a-a377-6812b5aca206-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-drp7p\" (UID: \"d1a768c9-8a8e-412a-a377-6812b5aca206\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725682 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8c7f\" (UniqueName: \"kubernetes.io/projected/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-kube-api-access-p8c7f\") pod \"collect-profiles-29492505-22jdn\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725708 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bznb\" (UniqueName: \"kubernetes.io/projected/ae7575d2-5f8d-44a1-90fb-653fe276f273-kube-api-access-9bznb\") pod \"kube-storage-version-migrator-operator-b67b599dd-n24nl\" (UID: \"ae7575d2-5f8d-44a1-90fb-653fe276f273\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725724 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w76mm\" (UniqueName: \"kubernetes.io/projected/5380cb77-bf7a-4cc1-b12b-7159748430eb-kube-api-access-w76mm\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725740 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0612fd3-e6b4-43b1-8e66-d0bf17281248-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khmz4\" (UID: \"f0612fd3-e6b4-43b1-8e66-d0bf17281248\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" Jan 27 21:49:48 crc kubenswrapper[4803]: 
I0127 21:49:48.725755 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31c328be-cd7e-48a1-bb8d-086bbe5f1dd6-apiservice-cert\") pod \"packageserver-d55dfcdfc-dfdfn\" (UID: \"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725772 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/827ee45d-1ade-46af-95fe-ab0e673f6dc1-config-volume\") pod \"dns-default-npwr7\" (UID: \"827ee45d-1ade-46af-95fe-ab0e673f6dc1\") " pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:48 crc kubenswrapper[4803]: E0127 21:49:48.725832 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:49.225775593 +0000 UTC m=+141.641797292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725897 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvvb\" (UniqueName: \"kubernetes.io/projected/056beb8e-ab30-48dc-b00e-6c261269431f-kube-api-access-8xvvb\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725923 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl5cr\" (UniqueName: \"kubernetes.io/projected/511157a0-ff3f-4105-b425-81fe57ec64e0-kube-api-access-pl5cr\") pod \"migrator-59844c95c7-w264r\" (UID: \"511157a0-ff3f-4105-b425-81fe57ec64e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725940 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0612fd3-e6b4-43b1-8e66-d0bf17281248-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khmz4\" (UID: \"f0612fd3-e6b4-43b1-8e66-d0bf17281248\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725967 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3887e56-f659-4a2f-ac29-e6841a2245da-config\") pod \"service-ca-operator-777779d784-f64jt\" (UID: \"c3887e56-f659-4a2f-ac29-e6841a2245da\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725981 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-plugins-dir\") 
pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.725999 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jlcm\" (UniqueName: \"kubernetes.io/projected/8f440e60-e9e3-43ef-93ca-9b27adeac069-kube-api-access-4jlcm\") pod \"control-plane-machine-set-operator-78cbb6b69f-wpzf9\" (UID: \"8f440e60-e9e3-43ef-93ca-9b27adeac069\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726013 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n7mdf\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726029 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3643de61-fe1e-4b5f-acef-ac477aa81f8a-cert\") pod \"ingress-canary-f5476\" (UID: \"3643de61-fe1e-4b5f-acef-ac477aa81f8a\") " pod="openshift-ingress-canary/ingress-canary-f5476" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726044 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d7bade4-c73a-419d-9c33-c30b0b7260ca-proxy-tls\") pod \"machine-config-controller-84d6567774-tlnvs\" (UID: \"8d7bade4-c73a-419d-9c33-c30b0b7260ca\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726102 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d7bade4-c73a-419d-9c33-c30b0b7260ca-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tlnvs\" (UID: \"8d7bade4-c73a-419d-9c33-c30b0b7260ca\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726119 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5380cb77-bf7a-4cc1-b12b-7159748430eb-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726137 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n7mdf\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726163 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae7575d2-5f8d-44a1-90fb-653fe276f273-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n24nl\" (UID: \"ae7575d2-5f8d-44a1-90fb-653fe276f273\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726182 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/056beb8e-ab30-48dc-b00e-6c261269431f-default-certificate\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726197 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/827ee45d-1ade-46af-95fe-ab0e673f6dc1-metrics-tls\") pod \"dns-default-npwr7\" (UID: \"827ee45d-1ade-46af-95fe-ab0e673f6dc1\") " pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726317 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-plugins-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726468 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/827ee45d-1ade-46af-95fe-ab0e673f6dc1-config-volume\") pod \"dns-default-npwr7\" (UID: \"827ee45d-1ade-46af-95fe-ab0e673f6dc1\") " pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726529 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-socket-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726587 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5380cb77-bf7a-4cc1-b12b-7159748430eb-images\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.726666 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/31c328be-cd7e-48a1-bb8d-086bbe5f1dd6-tmpfs\") pod \"packageserver-d55dfcdfc-dfdfn\" (UID: \"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.727250 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3887e56-f659-4a2f-ac29-e6841a2245da-config\") pod \"service-ca-operator-777779d784-f64jt\" (UID: \"c3887e56-f659-4a2f-ac29-e6841a2245da\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.727968 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/841660c5-b152-467b-97d4-38b9a181d315-signing-cabundle\") pod \"service-ca-9c57cc56f-bgfw4\" (UID: \"841660c5-b152-467b-97d4-38b9a181d315\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.729042 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/827ee45d-1ade-46af-95fe-ab0e673f6dc1-metrics-tls\") pod \"dns-default-npwr7\" (UID: \"827ee45d-1ade-46af-95fe-ab0e673f6dc1\") " pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.731142 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-registration-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.731423 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-mountpoint-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.731868 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d2eb7aad-8e72-489c-a000-ef21c4d9589a-csi-data-dir\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.733705 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1a768c9-8a8e-412a-a377-6812b5aca206-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-drp7p\" (UID: \"d1a768c9-8a8e-412a-a377-6812b5aca206\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.733955 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n7mdf\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.734324 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7575d2-5f8d-44a1-90fb-653fe276f273-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n24nl\" (UID: \"ae7575d2-5f8d-44a1-90fb-653fe276f273\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.734610 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmvfj\" (UniqueName: \"kubernetes.io/projected/1bc7c7ba-cad8-4f64-836e-a564b254e1fd-kube-api-access-xmvfj\") pod \"downloads-7954f5f757-9drvm\" (UID: \"1bc7c7ba-cad8-4f64-836e-a564b254e1fd\") " pod="openshift-console/downloads-7954f5f757-9drvm" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.737821 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/04162a6e-b772-45d4-9ec4-894e70fc95a2-node-bootstrap-token\") pod 
\"machine-config-server-xhhs6\" (UID: \"04162a6e-b772-45d4-9ec4-894e70fc95a2\") " pod="openshift-machine-config-operator/machine-config-server-xhhs6" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.738728 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25eb3de0-78b3-4e89-a860-9f1778060c50-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qcx9g\" (UID: \"25eb3de0-78b3-4e89-a860-9f1778060c50\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.739633 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d65kn\" (UID: \"ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.739688 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0612fd3-e6b4-43b1-8e66-d0bf17281248-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khmz4\" (UID: \"f0612fd3-e6b4-43b1-8e66-d0bf17281248\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.742276 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5380cb77-bf7a-4cc1-b12b-7159748430eb-proxy-tls\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.742371 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1a88b1-f5d6-4946-8dda-3defb18a63fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dqrt7\" (UID: \"1b1a88b1-f5d6-4946-8dda-3defb18a63fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.742601 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0612fd3-e6b4-43b1-8e66-d0bf17281248-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khmz4\" (UID: \"f0612fd3-e6b4-43b1-8e66-d0bf17281248\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.742615 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae7575d2-5f8d-44a1-90fb-653fe276f273-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n24nl\" (UID: \"ae7575d2-5f8d-44a1-90fb-653fe276f273\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.743087 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-n7mdf\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.743741 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/04162a6e-b772-45d4-9ec4-894e70fc95a2-certs\") pod \"machine-config-server-xhhs6\" (UID: \"04162a6e-b772-45d4-9ec4-894e70fc95a2\") " pod="openshift-machine-config-operator/machine-config-server-xhhs6" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.744195 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1a768c9-8a8e-412a-a377-6812b5aca206-config\") pod \"kube-apiserver-operator-766d6c64bb-drp7p\" (UID: \"d1a768c9-8a8e-412a-a377-6812b5aca206\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.745180 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d7bade4-c73a-419d-9c33-c30b0b7260ca-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tlnvs\" (UID: \"8d7bade4-c73a-419d-9c33-c30b0b7260ca\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.745450 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/767d334b-3f70-4847-b45a-ccf0d7e2dc2b-profile-collector-cert\") pod \"catalog-operator-68c6474976-hmpmk\" (UID: \"767d334b-3f70-4847-b45a-ccf0d7e2dc2b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.745510 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3887e56-f659-4a2f-ac29-e6841a2245da-serving-cert\") pod \"service-ca-operator-777779d784-f64jt\" (UID: \"c3887e56-f659-4a2f-ac29-e6841a2245da\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.745543 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-config-volume\") pod \"collect-profiles-29492505-22jdn\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.746080 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/767d334b-3f70-4847-b45a-ccf0d7e2dc2b-srv-cert\") pod \"catalog-operator-68c6474976-hmpmk\" (UID: \"767d334b-3f70-4847-b45a-ccf0d7e2dc2b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.746395 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b1a88b1-f5d6-4946-8dda-3defb18a63fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dqrt7\" (UID: \"1b1a88b1-f5d6-4946-8dda-3defb18a63fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 
21:49:48.746803 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3643de61-fe1e-4b5f-acef-ac477aa81f8a-cert\") pod \"ingress-canary-f5476\" (UID: \"3643de61-fe1e-4b5f-acef-ac477aa81f8a\") " pod="openshift-ingress-canary/ingress-canary-f5476" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.747260 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/056beb8e-ab30-48dc-b00e-6c261269431f-service-ca-bundle\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.749484 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/056beb8e-ab30-48dc-b00e-6c261269431f-metrics-certs\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.750098 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/056beb8e-ab30-48dc-b00e-6c261269431f-default-certificate\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.752139 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31c328be-cd7e-48a1-bb8d-086bbe5f1dd6-apiservice-cert\") pod \"packageserver-d55dfcdfc-dfdfn\" (UID: \"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.752455 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31c328be-cd7e-48a1-bb8d-086bbe5f1dd6-webhook-cert\") pod \"packageserver-d55dfcdfc-dfdfn\" (UID: \"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.752577 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7-metrics-tls\") pod \"dns-operator-744455d44c-qrccx\" (UID: \"a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7\") " pod="openshift-dns-operator/dns-operator-744455d44c-qrccx" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.752644 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f440e60-e9e3-43ef-93ca-9b27adeac069-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wpzf9\" (UID: \"8f440e60-e9e3-43ef-93ca-9b27adeac069\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.752686 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/066a7a5b-c610-4b2e-a2f6-2c90b997fbc9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-szkgj\" (UID: \"066a7a5b-c610-4b2e-a2f6-2c90b997fbc9\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.753272 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25eb3de0-78b3-4e89-a860-9f1778060c50-srv-cert\") pod \"olm-operator-6b444d44fb-qcx9g\" (UID: \"25eb3de0-78b3-4e89-a860-9f1778060c50\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.753365 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkg89\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-kube-api-access-hkg89\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.754000 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-secret-volume\") pod \"collect-profiles-29492505-22jdn\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.754913 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/841660c5-b152-467b-97d4-38b9a181d315-signing-key\") pod \"service-ca-9c57cc56f-bgfw4\" (UID: \"841660c5-b152-467b-97d4-38b9a181d315\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.757210 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d7bade4-c73a-419d-9c33-c30b0b7260ca-proxy-tls\") pod \"machine-config-controller-84d6567774-tlnvs\" (UID: \"8d7bade4-c73a-419d-9c33-c30b0b7260ca\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.757445 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-stngg"] Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.761430 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/056beb8e-ab30-48dc-b00e-6c261269431f-stats-auth\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.770488 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-bound-sa-token\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.798221 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-9drvm" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.799891 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p5xw\" (UniqueName: \"kubernetes.io/projected/61adce3e-cfdd-4a33-b64d-f49069ef6469-kube-api-access-5p5xw\") pod \"console-operator-58897d9998-h9nvv\" (UID: \"61adce3e-cfdd-4a33-b64d-f49069ef6469\") " pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.815527 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhlmg\" (UniqueName: \"kubernetes.io/projected/e2308949-6865-4d3b-ad3b-1de5c42149b8-kube-api-access-fhlmg\") pod \"machine-api-operator-5694c8668f-8lpmj\" (UID: \"e2308949-6865-4d3b-ad3b-1de5c42149b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.825655 4803 csr.go:261] certificate signing request csr-ffmc8 is approved, waiting to be issued Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.827298 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:48 crc kubenswrapper[4803]: E0127 21:49:48.827916 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:49.32790164 +0000 UTC m=+141.743923339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.833628 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.834222 4803 csr.go:257] certificate signing request csr-ffmc8 is issued Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.840633 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.844768 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp5td\" (UniqueName: \"kubernetes.io/projected/8f8b8ad1-f276-4546-afd2-49f338f38c92-kube-api-access-lp5td\") pod \"authentication-operator-69f744f599-kdr8w\" (UID: \"8f8b8ad1-f276-4546-afd2-49f338f38c92\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.846467 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7"] Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.850011 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-925bm\" (UniqueName: \"kubernetes.io/projected/7a6eb50d-a8af-4e53-a129-aee15ae61037-kube-api-access-925bm\") pod \"apiserver-7bbb656c7d-th8dv\" (UID: \"7a6eb50d-a8af-4e53-a129-aee15ae61037\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.852696 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:48 crc kubenswrapper[4803]: W0127 21:49:48.860648 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda87a7bd1_74f2_4c14_a3c0_adf951393f10.slice/crio-a867006b942b2ab8c0fbb8ba550aaae5b70cea4483575c5dcb2ff2e621b7cdb4 WatchSource:0}: Error finding container a867006b942b2ab8c0fbb8ba550aaae5b70cea4483575c5dcb2ff2e621b7cdb4: Status 404 returned error can't find the container with id a867006b942b2ab8c0fbb8ba550aaae5b70cea4483575c5dcb2ff2e621b7cdb4 Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.863043 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.870584 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm9zj\" (UniqueName: \"kubernetes.io/projected/8e01cae6-c0f6-4f51-ba69-6a162470b81c-kube-api-access-mm9zj\") pod \"cluster-samples-operator-665b6dd947-2vbkh\" (UID: \"8e01cae6-c0f6-4f51-ba69-6a162470b81c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.890466 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e62d282c-a35b-42d6-a490-e11c0239b6c3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k88zf\" (UID: \"e62d282c-a35b-42d6-a490-e11c0239b6c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.897970 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq"] Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.903528 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.910473 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.912987 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbfsf\" (UniqueName: \"kubernetes.io/projected/6e8228ba-8397-4400-b30f-07dcf24d6fb5-kube-api-access-kbfsf\") pod \"controller-manager-879f6c89f-74666\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:48 crc kubenswrapper[4803]: W0127 21:49:48.922220 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc396037_51ea_4671_bc9d_821a5505ace9.slice/crio-da765f13f65a5a1d0a18f4f307406e304012ee649f152dab1c47752f93b77130 WatchSource:0}: Error finding container da765f13f65a5a1d0a18f4f307406e304012ee649f152dab1c47752f93b77130: Status 404 returned error can't find the container with id da765f13f65a5a1d0a18f4f307406e304012ee649f152dab1c47752f93b77130 Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.925082 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.927946 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:48 crc kubenswrapper[4803]: E0127 21:49:48.928369 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:49.428352753 +0000 UTC m=+141.844374452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.955833 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k7cb\" (UniqueName: \"kubernetes.io/projected/a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7-kube-api-access-6k7cb\") pod \"dns-operator-744455d44c-qrccx\" (UID: \"a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7\") " pod="openshift-dns-operator/dns-operator-744455d44c-qrccx" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.975912 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9t9m\" (UniqueName: \"kubernetes.io/projected/841660c5-b152-467b-97d4-38b9a181d315-kube-api-access-c9t9m\") pod \"service-ca-9c57cc56f-bgfw4\" (UID: \"841660c5-b152-467b-97d4-38b9a181d315\") " pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.997472 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 21:49:48 crc kubenswrapper[4803]: I0127 21:49:48.999042 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvvb\" (UniqueName: \"kubernetes.io/projected/056beb8e-ab30-48dc-b00e-6c261269431f-kube-api-access-8xvvb\") pod \"router-default-5444994796-mgtlh\" (UID: \"056beb8e-ab30-48dc-b00e-6c261269431f\") " pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.017597 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl5cr\" (UniqueName: \"kubernetes.io/projected/511157a0-ff3f-4105-b425-81fe57ec64e0-kube-api-access-pl5cr\") pod \"migrator-59844c95c7-w264r\" (UID: \"511157a0-ff3f-4105-b425-81fe57ec64e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.024274 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.029651 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:49 crc kubenswrapper[4803]: E0127 21:49:49.030064 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:49.530048829 +0000 UTC m=+141.946070528 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.036351 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0612fd3-e6b4-43b1-8e66-d0bf17281248-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khmz4\" (UID: \"f0612fd3-e6b4-43b1-8e66-d0bf17281248\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.041024 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9drvm"] Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.056185 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw9xp\" (UniqueName: \"kubernetes.io/projected/c3887e56-f659-4a2f-ac29-e6841a2245da-kube-api-access-tw9xp\") pod \"service-ca-operator-777779d784-f64jt\" (UID: \"c3887e56-f659-4a2f-ac29-e6841a2245da\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.060393 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.060911 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.073331 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6854s\" (UniqueName: \"kubernetes.io/projected/31c328be-cd7e-48a1-bb8d-086bbe5f1dd6-kube-api-access-6854s\") pod \"packageserver-d55dfcdfc-dfdfn\" (UID: \"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.076220 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.090516 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vtjk\" (UniqueName: \"kubernetes.io/projected/d2eb7aad-8e72-489c-a000-ef21c4d9589a-kube-api-access-9vtjk\") pod \"csi-hostpathplugin-jh44p\" (UID: \"d2eb7aad-8e72-489c-a000-ef21c4d9589a\") " pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.091563 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.107263 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jh44p" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.125338 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jlcm\" (UniqueName: \"kubernetes.io/projected/8f440e60-e9e3-43ef-93ca-9b27adeac069-kube-api-access-4jlcm\") pod \"control-plane-machine-set-operator-78cbb6b69f-wpzf9\" (UID: \"8f440e60-e9e3-43ef-93ca-9b27adeac069\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.130403 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:49 crc kubenswrapper[4803]: E0127 21:49:49.130920 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:49.630901063 +0000 UTC m=+142.046922762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.135477 4803 generic.go:334] "Generic (PLEG): container finished" podID="02c1fd2d-3326-44dc-9353-1c19a701826c" containerID="ea939563af9d52f2f4edf618845efef791b848167f0d5ab62bae5e2650cd5869" exitCode=0 Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.135519 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" event={"ID":"02c1fd2d-3326-44dc-9353-1c19a701826c","Type":"ContainerDied","Data":"ea939563af9d52f2f4edf618845efef791b848167f0d5ab62bae5e2650cd5869"} Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.137166 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" event={"ID":"fc396037-51ea-4671-bc9d-821a5505ace9","Type":"ContainerStarted","Data":"da765f13f65a5a1d0a18f4f307406e304012ee649f152dab1c47752f93b77130"} Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.138378 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" event={"ID":"04116321-6817-48ae-9107-cd7bac2addf3","Type":"ContainerStarted","Data":"ba420a59f8d78791662570590d369450222fa712c96fcc31477eda53eb3f8882"} Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.138410 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" event={"ID":"04116321-6817-48ae-9107-cd7bac2addf3","Type":"ContainerStarted","Data":"41c1b2d1fa262a6a6927bd68c6af9bbcce8560b411a23a32fc00325aeda2a4a6"} Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.148938 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v2dhb\" (UniqueName: \"kubernetes.io/projected/8d7bade4-c73a-419d-9c33-c30b0b7260ca-kube-api-access-v2dhb\") pod \"machine-config-controller-84d6567774-tlnvs\" (UID: \"8d7bade4-c73a-419d-9c33-c30b0b7260ca\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.150306 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" event={"ID":"5bc2ab0a-3831-417d-95cd-f5e392217120","Type":"ContainerStarted","Data":"6e370e9ca28ee9e817f40992d0c53fb7903f3fde2ddaef66a59c5029c67ed010"} Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.150369 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" event={"ID":"5bc2ab0a-3831-417d-95cd-f5e392217120","Type":"ContainerStarted","Data":"006a4c63972c80a5f5c2c4ea4ab5d22d89bb44fd750efa92305d7b54c4fc937e"} Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.152051 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1a768c9-8a8e-412a-a377-6812b5aca206-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-drp7p\" (UID: \"d1a768c9-8a8e-412a-a377-6812b5aca206\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.153969 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" event={"ID":"a87a7bd1-74f2-4c14-a3c0-adf951393f10","Type":"ContainerStarted","Data":"bca8b21cdc4fe8e4aff419779bb73872686c2a1ff5c7bf52a7f75a4be454cefe"} Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.154024 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" event={"ID":"a87a7bd1-74f2-4c14-a3c0-adf951393f10","Type":"ContainerStarted","Data":"a867006b942b2ab8c0fbb8ba550aaae5b70cea4483575c5dcb2ff2e621b7cdb4"} Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.159446 4803 generic.go:334] "Generic (PLEG): container finished" podID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerID="5fe1abdc2b6a0b36e76da0ee99a6171da2378b161b099615eca0511246eaa6ff" exitCode=0 Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.159494 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" event={"ID":"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2","Type":"ContainerDied","Data":"5fe1abdc2b6a0b36e76da0ee99a6171da2378b161b099615eca0511246eaa6ff"} Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.159529 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" event={"ID":"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2","Type":"ContainerStarted","Data":"8f559a5747216cbe4d906ceffafa50100ae54d9b702f68c9f27f9f26d01a08c9"} Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.172322 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8c7f\" (UniqueName: \"kubernetes.io/projected/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-kube-api-access-p8c7f\") pod \"collect-profiles-29492505-22jdn\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 
21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.193956 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bznb\" (UniqueName: \"kubernetes.io/projected/ae7575d2-5f8d-44a1-90fb-653fe276f273-kube-api-access-9bznb\") pod \"kube-storage-version-migrator-operator-b67b599dd-n24nl\" (UID: \"ae7575d2-5f8d-44a1-90fb-653fe276f273\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.224797 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w76mm\" (UniqueName: \"kubernetes.io/projected/5380cb77-bf7a-4cc1-b12b-7159748430eb-kube-api-access-w76mm\") pod \"machine-config-operator-74547568cd-7lfg2\" (UID: \"5380cb77-bf7a-4cc1-b12b-7159748430eb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.229386 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.235038 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:49 crc kubenswrapper[4803]: E0127 21:49:49.237578 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:49.737555332 +0000 UTC m=+142.153577031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.238811 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7g5z\" (UniqueName: \"kubernetes.io/projected/066a7a5b-c610-4b2e-a2f6-2c90b997fbc9-kube-api-access-w7g5z\") pod \"multus-admission-controller-857f4d67dd-szkgj\" (UID: \"066a7a5b-c610-4b2e-a2f6-2c90b997fbc9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.240028 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qrccx" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.261753 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrrx2\" (UniqueName: \"kubernetes.io/projected/ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9-kube-api-access-qrrx2\") pod \"package-server-manager-789f6589d5-d65kn\" (UID: \"ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.266373 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.274767 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.275050 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfg9s\" (UniqueName: \"kubernetes.io/projected/767d334b-3f70-4847-b45a-ccf0d7e2dc2b-kube-api-access-kfg9s\") pod \"catalog-operator-68c6474976-hmpmk\" (UID: \"767d334b-3f70-4847-b45a-ccf0d7e2dc2b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.290261 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.306040 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.309739 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98p87\" (UniqueName: \"kubernetes.io/projected/1b1a88b1-f5d6-4946-8dda-3defb18a63fd-kube-api-access-98p87\") pod \"openshift-controller-manager-operator-756b6f6bc6-dqrt7\" (UID: \"1b1a88b1-f5d6-4946-8dda-3defb18a63fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.312260 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.320887 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7x4wr"] Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.321230 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.325082 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.332603 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkhrm\" (UniqueName: \"kubernetes.io/projected/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-kube-api-access-wkhrm\") pod \"marketplace-operator-79b997595-n7mdf\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.333535 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzthm\" (UniqueName: \"kubernetes.io/projected/25eb3de0-78b3-4e89-a860-9f1778060c50-kube-api-access-fzthm\") pod \"olm-operator-6b444d44fb-qcx9g\" (UID: \"25eb3de0-78b3-4e89-a860-9f1778060c50\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.337217 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.338017 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" Jan 27 21:49:49 crc kubenswrapper[4803]: E0127 21:49:49.338233 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:49.83820777 +0000 UTC m=+142.254229459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.347457 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.356657 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.357955 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-h9nvv"] Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.360731 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvqjh\" (UniqueName: \"kubernetes.io/projected/827ee45d-1ade-46af-95fe-ab0e673f6dc1-kube-api-access-jvqjh\") pod \"dns-default-npwr7\" (UID: \"827ee45d-1ade-46af-95fe-ab0e673f6dc1\") " pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.367499 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.375313 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm2vh\" (UniqueName: \"kubernetes.io/projected/04162a6e-b772-45d4-9ec4-894e70fc95a2-kube-api-access-bm2vh\") pod \"machine-config-server-xhhs6\" (UID: \"04162a6e-b772-45d4-9ec4-894e70fc95a2\") " pod="openshift-machine-config-operator/machine-config-server-xhhs6" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.383250 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.392741 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd5t6\" (UniqueName: \"kubernetes.io/projected/3643de61-fe1e-4b5f-acef-ac477aa81f8a-kube-api-access-hd5t6\") pod \"ingress-canary-f5476\" (UID: \"3643de61-fe1e-4b5f-acef-ac477aa81f8a\") " pod="openshift-ingress-canary/ingress-canary-f5476" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.427367 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xhhs6" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.431761 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.438830 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:49 crc kubenswrapper[4803]: E0127 21:49:49.439203 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:49.939189997 +0000 UTC m=+142.355211686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.442438 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f5476" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.456405 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8lpmj"] Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.458717 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv"] Jan 27 21:49:49 crc kubenswrapper[4803]: W0127 21:49:49.486048 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61adce3e_cfdd_4a33_b64d_f49069ef6469.slice/crio-1440aac9416da22764516ba34fbe9559dd08bc77f7fe1ce1ced355d071f56f15 WatchSource:0}: Error finding container 1440aac9416da22764516ba34fbe9559dd08bc77f7fe1ce1ced355d071f56f15: Status 404 returned error can't find the container with id 1440aac9416da22764516ba34fbe9559dd08bc77f7fe1ce1ced355d071f56f15 Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.540739 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:49 crc kubenswrapper[4803]: E0127 21:49:49.541428 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.041399708 +0000 UTC m=+142.457421407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.545045 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-s9tzw"] Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.550785 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.564468 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kdr8w"] Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.564526 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf"] Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.580263 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh"] Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.582736 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pslb5"] Jan 27 21:49:49 crc kubenswrapper[4803]: W0127 21:49:49.592191 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2308949_6865_4d3b_ad3b_1de5c42149b8.slice/crio-0c4fb8b157d659b0a6738348b56ad6b90246075f9c7036eb2e69b2d78110ff7f WatchSource:0}: Error finding container 0c4fb8b157d659b0a6738348b56ad6b90246075f9c7036eb2e69b2d78110ff7f: Status 404 returned error can't find the container with id 0c4fb8b157d659b0a6738348b56ad6b90246075f9c7036eb2e69b2d78110ff7f Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.599861 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bgfw4"] Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.634138 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:49 crc kubenswrapper[4803]: W0127 21:49:49.636288 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf15cefaf_aacf_45a8_a2d5_8874dcf893b1.slice/crio-97508d5079435ea5837b173bd559c9599b069955517f1fa5d31a1fd708014229 WatchSource:0}: Error finding container 97508d5079435ea5837b173bd559c9599b069955517f1fa5d31a1fd708014229: Status 404 returned error can't find the container with id 97508d5079435ea5837b173bd559c9599b069955517f1fa5d31a1fd708014229 Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.642916 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:49 crc kubenswrapper[4803]: E0127 21:49:49.643275 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.143262678 +0000 UTC m=+142.559284377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.744316 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:49 crc kubenswrapper[4803]: E0127 21:49:49.744728 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.244704277 +0000 UTC m=+142.660725976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.835932 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 21:44:48 +0000 UTC, rotation deadline is 2026-11-10 02:16:38.162129833 +0000 UTC Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.835992 4803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6868h26m48.326140117s for next certificate rotation Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.847935 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:49 crc kubenswrapper[4803]: E0127 21:49:49.848261 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.348249303 +0000 UTC m=+142.764271002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.948654 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:49 crc kubenswrapper[4803]: E0127 21:49:49.948890 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.448824849 +0000 UTC m=+142.864846558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:49 crc kubenswrapper[4803]: I0127 21:49:49.949359 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:49 crc kubenswrapper[4803]: E0127 21:49:49.949799 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.449784004 +0000 UTC m=+142.865805703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.000189 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f64jt"] Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.051701 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.051977 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.551956434 +0000 UTC m=+142.967978133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.055928 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.056275 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.556263908 +0000 UTC m=+142.972285607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.162145 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.162469 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.662442484 +0000 UTC m=+143.078464183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.162579 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.162982 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.662968239 +0000 UTC m=+143.078989928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.197323 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" event={"ID":"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2","Type":"ContainerStarted","Data":"cb4ba389c387b989d42589e012b26e5087e092983e020a588397aa541d65796f"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.198964 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.201206 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jh44p"] Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.208132 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" event={"ID":"fc396037-51ea-4671-bc9d-821a5505ace9","Type":"ContainerStarted","Data":"23c065fd4f4c1811f7bd7fc7b50356ca0e625f9fde9a801955a9fc132f2d3e28"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.209314 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.209672 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn"] Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.217047 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mgtlh" event={"ID":"056beb8e-ab30-48dc-b00e-6c261269431f","Type":"ContainerStarted","Data":"21f73c45e2f9012a699b50af081501f3fc1d57615e96de8b16ffb2f2ceadddf4"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.217171 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mgtlh" event={"ID":"056beb8e-ab30-48dc-b00e-6c261269431f","Type":"ContainerStarted","Data":"99e434f6ed70a9adc779d9644123427ab6f36ed8410919a430aa29126916ecb6"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.228732 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" event={"ID":"e62d282c-a35b-42d6-a490-e11c0239b6c3","Type":"ContainerStarted","Data":"f9deaf47ffbc47b2f6908f2ccfe8d7444637452e97a674d339e4fa2b7133a623"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.239194 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xhhs6" event={"ID":"04162a6e-b772-45d4-9ec4-894e70fc95a2","Type":"ContainerStarted","Data":"52fcd8e316b3a165812af7b52a4dc02d7c5ac0ef2f64c4374864302f82a60134"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.258624 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" 
event={"ID":"841660c5-b152-467b-97d4-38b9a181d315","Type":"ContainerStarted","Data":"ca33bcfb6bfad52771c4aa373692cf7d5b667af701153e2a9656c8709b8530c6"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.260881 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" event={"ID":"f15cefaf-aacf-45a8-a2d5-8874dcf893b1","Type":"ContainerStarted","Data":"97508d5079435ea5837b173bd559c9599b069955517f1fa5d31a1fd708014229"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.267597 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.268225 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.268255 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.7682308 +0000 UTC m=+143.184252499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.274197 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" event={"ID":"61adce3e-cfdd-4a33-b64d-f49069ef6469","Type":"ContainerStarted","Data":"1440aac9416da22764516ba34fbe9559dd08bc77f7fe1ce1ced355d071f56f15"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.297361 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" event={"ID":"70091f5f-e06c-4cf3-8bc8-299f10207363","Type":"ContainerStarted","Data":"5b338dca76870e9c377291e2af94b96822c7715b3a6fdc0306a22ccb8253ccd0"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.370463 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.372308 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.872291519 +0000 UTC m=+143.288313218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.449767 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-s9tzw" event={"ID":"b06a9990-b5a6-4198-b3da-22eb6df6692b","Type":"ContainerStarted","Data":"8d87e246a5546a1e6a10cf8d381766667ac0e1a83454f068bce1bb32f09dbf1e"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.449796 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" event={"ID":"7a6eb50d-a8af-4e53-a129-aee15ae61037","Type":"ContainerStarted","Data":"8889b93ee3ed249fcccf345e4c2915f50e71510a828b27a6ea7c233000cc41c9"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.449810 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9drvm" Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.449820 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" event={"ID":"8f8b8ad1-f276-4546-afd2-49f338f38c92","Type":"ContainerStarted","Data":"418d47c5c10472d7f021aee88b2c6f07c5864937ce6581e8dcfb0de218d22f49"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.449829 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9drvm" event={"ID":"1bc7c7ba-cad8-4f64-836e-a564b254e1fd","Type":"ContainerStarted","Data":"9c1c476365c93b00790faff6079d8ec328f7094e2a3680b8e68886f05063e41d"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.449838 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9drvm" event={"ID":"1bc7c7ba-cad8-4f64-836e-a564b254e1fd","Type":"ContainerStarted","Data":"418a1a0d116a709b0e816b52ec715f496bd046f8e223b30f1943edbc32ca29bf"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.449861 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" event={"ID":"02c1fd2d-3326-44dc-9353-1c19a701826c","Type":"ContainerStarted","Data":"31fa036e1a9d148c6685eb13eaa290717a99b8f33cffae57f4cbc6cb91262238"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.467132 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" event={"ID":"e2308949-6865-4d3b-ad3b-1de5c42149b8","Type":"ContainerStarted","Data":"0c4fb8b157d659b0a6738348b56ad6b90246075f9c7036eb2e69b2d78110ff7f"} Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.473657 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.474057 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:50.974040788 +0000 UTC m=+143.390062487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.503694 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.503750 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.519751 4803 patch_prober.go:28] interesting pod/downloads-7954f5f757-9drvm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.519833 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9drvm" podUID="1bc7c7ba-cad8-4f64-836e-a564b254e1fd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.578636 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.580820 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:51.080803749 +0000 UTC m=+143.496825548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.613051 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxhm8" podStartSLOduration=122.613034029 podStartE2EDuration="2m2.613034029s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:50.541473868 +0000 UTC m=+142.957495567" watchObservedRunningTime="2026-01-27 21:49:50.613034029 +0000 UTC m=+143.029055728" Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.633591 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smwn2" podStartSLOduration=122.633575079 podStartE2EDuration="2m2.633575079s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:50.633150516 +0000 UTC m=+143.049172215" watchObservedRunningTime="2026-01-27 21:49:50.633575079 +0000 UTC m=+143.049596778" Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.680090 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.680674 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:51.180659385 +0000 UTC m=+143.596681084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.697282 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl"]
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.697346 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq"
Jan 27 21:49:50 crc kubenswrapper[4803]: W0127 21:49:50.700933 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae7575d2_5f8d_44a1_90fb_653fe276f273.slice/crio-778fdbcef138ae30c91788a5bf1da6c07547184979df6f7b0cc5612eb7d70e57 WatchSource:0}: Error finding container 778fdbcef138ae30c91788a5bf1da6c07547184979df6f7b0cc5612eb7d70e57: Status 404 returned error can't find the container with id 778fdbcef138ae30c91788a5bf1da6c07547184979df6f7b0cc5612eb7d70e57
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.732108 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-74666"]
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.741626 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qrccx"]
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.743565 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk"]
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.765953 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2"]
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.783855 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.784196 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:51.284183571 +0000 UTC m=+143.700205270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.794503 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4"]
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.890450 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.891460 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:51.391440496 +0000 UTC m=+143.807462195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.964300 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p"]
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.992655 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xqpl4" podStartSLOduration=122.992631629 podStartE2EDuration="2m2.992631629s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:50.965252487 +0000 UTC m=+143.381274186" watchObservedRunningTime="2026-01-27 21:49:50.992631629 +0000 UTC m=+143.408653328"
Jan 27 21:49:50 crc kubenswrapper[4803]: I0127 21:49:50.997374 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:50 crc kubenswrapper[4803]: E0127 21:49:50.997781 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:51.497766806 +0000 UTC m=+143.913788495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.002309 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn"]
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.002359 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs"]
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.032304 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4nmr7" podStartSLOduration=123.032279797 podStartE2EDuration="2m3.032279797s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.02940186 +0000 UTC m=+143.445423569" watchObservedRunningTime="2026-01-27 21:49:51.032279797 +0000 UTC m=+143.448301496"
Jan 27 21:49:51 crc kubenswrapper[4803]: W0127 21:49:51.050993 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1a768c9_8a8e_412a_a377_6812b5aca206.slice/crio-549812ee18e00ccd84610ff59d713d5382c9752b862498d10d0e82ad770505bd WatchSource:0}: Error finding container 549812ee18e00ccd84610ff59d713d5382c9752b862498d10d0e82ad770505bd: Status 404 returned error can't find the container with id 549812ee18e00ccd84610ff59d713d5382c9752b862498d10d0e82ad770505bd
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.091136 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r"]
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.099909 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9"]
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.099973 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-szkgj"]
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.100630 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:51 crc kubenswrapper[4803]: E0127 21:49:51.100986 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:51.600970531 +0000 UTC m=+144.016992230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.115808 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n7mdf"]
Jan 27 21:49:51 crc kubenswrapper[4803]: W0127 21:49:51.131988 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d7bade4_c73a_419d_9c33_c30b0b7260ca.slice/crio-1c4539ae91fb6961401a3c6e6e3ba7fe882be11df43c91db8002d152f9be3419 WatchSource:0}: Error finding container 1c4539ae91fb6961401a3c6e6e3ba7fe882be11df43c91db8002d152f9be3419: Status 404 returned error can't find the container with id 1c4539ae91fb6961401a3c6e6e3ba7fe882be11df43c91db8002d152f9be3419
Jan 27 21:49:51 crc kubenswrapper[4803]: W0127 21:49:51.160194 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f440e60_e9e3_43ef_93ca_9b27adeac069.slice/crio-37ac19e2fa8ff223c946a8a0509e50be2b22c9f555541382bfd662ef31baf368 WatchSource:0}: Error finding container 37ac19e2fa8ff223c946a8a0509e50be2b22c9f555541382bfd662ef31baf368: Status 404 returned error can't find the container with id 37ac19e2fa8ff223c946a8a0509e50be2b22c9f555541382bfd662ef31baf368
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.204746 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:51 crc kubenswrapper[4803]: E0127 21:49:51.205085 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:51.705061422 +0000 UTC m=+144.121083111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.227295 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mgtlh" podStartSLOduration=123.227273886 podStartE2EDuration="2m3.227273886s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.213935969 +0000 UTC m=+143.629957668" watchObservedRunningTime="2026-01-27 21:49:51.227273886 +0000 UTC m=+143.643295585"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.227753 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7"]
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.240112 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g"]
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.241437 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f5476"]
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.249233 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" podStartSLOduration=123.249213491 podStartE2EDuration="2m3.249213491s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.248241725 +0000 UTC m=+143.664263424" watchObservedRunningTime="2026-01-27 21:49:51.249213491 +0000 UTC m=+143.665235190"
Jan 27 21:49:51 crc kubenswrapper[4803]: W0127 21:49:51.261987 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b1a88b1_f5d6_4946_8dda_3defb18a63fd.slice/crio-4bab17bf9d621c971934faf6e6b5a78dc9e12f1fe724935322da0b2597721867 WatchSource:0}: Error finding container 4bab17bf9d621c971934faf6e6b5a78dc9e12f1fe724935322da0b2597721867: Status 404 returned error can't find the container with id 4bab17bf9d621c971934faf6e6b5a78dc9e12f1fe724935322da0b2597721867
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.272277 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 21:49:51 crc kubenswrapper[4803]: [-]has-synced failed: reason withheld
Jan 27 21:49:51 crc kubenswrapper[4803]: [+]process-running ok
Jan 27 21:49:51 crc kubenswrapper[4803]: healthz check failed
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.272339 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.294494 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn"]
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.297565 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9drvm" podStartSLOduration=123.297547083 podStartE2EDuration="2m3.297547083s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.290384461 +0000 UTC m=+143.706406160" watchObservedRunningTime="2026-01-27 21:49:51.297547083 +0000 UTC m=+143.713568782"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.302592 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-npwr7"]
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.305823 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:51 crc kubenswrapper[4803]: E0127 21:49:51.306340 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:51.806321157 +0000 UTC m=+144.222342856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:51 crc kubenswrapper[4803]: W0127 21:49:51.306396 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3643de61_fe1e_4b5f_acef_ac477aa81f8a.slice/crio-0b0b5ca5c3ebe5aa17b74ea2f269d255f238865780417d828ec4b3e99c545fbe WatchSource:0}: Error finding container 0b0b5ca5c3ebe5aa17b74ea2f269d255f238865780417d828ec4b3e99c545fbe: Status 404 returned error can't find the container with id 0b0b5ca5c3ebe5aa17b74ea2f269d255f238865780417d828ec4b3e99c545fbe
Jan 27 21:49:51 crc kubenswrapper[4803]: W0127 21:49:51.315708 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25eb3de0_78b3_4e89_a860_9f1778060c50.slice/crio-1303a78cc25ff9884888c07a307fa3edc4534caaf0fcf4bbd8a3d085ce6075e6 WatchSource:0}: Error finding container 1303a78cc25ff9884888c07a307fa3edc4534caaf0fcf4bbd8a3d085ce6075e6: Status 404 returned error can't find the container with id 1303a78cc25ff9884888c07a307fa3edc4534caaf0fcf4bbd8a3d085ce6075e6
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.331429 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podStartSLOduration=123.331402237 podStartE2EDuration="2m3.331402237s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.330475201 +0000 UTC m=+143.746496900" watchObservedRunningTime="2026-01-27 21:49:51.331402237 +0000 UTC m=+143.747423936"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.408108 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:51 crc kubenswrapper[4803]: E0127 21:49:51.408465 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:51.908449244 +0000 UTC m=+144.324470943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.479091 4803 generic.go:334] "Generic (PLEG): container finished" podID="7a6eb50d-a8af-4e53-a129-aee15ae61037" containerID="e40a2e34cbc36e84e6e8105b09d93b9d6f80adc24d79efe0795f8249c33150c0" exitCode=0
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.479147 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" event={"ID":"7a6eb50d-a8af-4e53-a129-aee15ae61037","Type":"ContainerDied","Data":"e40a2e34cbc36e84e6e8105b09d93b9d6f80adc24d79efe0795f8249c33150c0"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.489886 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-s9tzw" event={"ID":"b06a9990-b5a6-4198-b3da-22eb6df6692b","Type":"ContainerStarted","Data":"d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.494204 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" event={"ID":"02c1fd2d-3326-44dc-9353-1c19a701826c","Type":"ContainerStarted","Data":"5a8668b3e1de40416338790690e94f6cec5dd40200e4b6721bafb7881d397cd1"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.508903 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:51 crc kubenswrapper[4803]: E0127 21:49:51.509289 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:52.009274477 +0000 UTC m=+144.425296176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.514493 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" event={"ID":"61adce3e-cfdd-4a33-b64d-f49069ef6469","Type":"ContainerStarted","Data":"5367198217cf89b564a5d7acd73e27cda5aee9cbcb8a6cf53aa9e5a1f104c01b"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.514657 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-h9nvv"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.516175 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jh44p" event={"ID":"d2eb7aad-8e72-489c-a000-ef21c4d9589a","Type":"ContainerStarted","Data":"ebdd44d6fb6d48d1cc4392f5b11d0f63ecb61018cb9ede40249f374ced9ed6a7"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.519657 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" event={"ID":"c3887e56-f659-4a2f-ac29-e6841a2245da","Type":"ContainerStarted","Data":"6e94c7b076b792a717b1de9babc94161c14dfb3edc5179b261d0c36b4389a292"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.520043 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" event={"ID":"c3887e56-f659-4a2f-ac29-e6841a2245da","Type":"ContainerStarted","Data":"9a70accd52f7e22783e8a4027c61278840aa545416c4ca1a8f7d9afb0724efd2"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.524046 4803 patch_prober.go:28] interesting pod/console-operator-58897d9998-h9nvv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.524093 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.525692 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" event={"ID":"8f8b8ad1-f276-4546-afd2-49f338f38c92","Type":"ContainerStarted","Data":"9acc22af19da35d55623b7276ae9fb7cc66d521319e7465ba5a43273849c52e6"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.528434 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" event={"ID":"dc3f105d-fa65-4c69-b14e-aac96d07c7e9","Type":"ContainerStarted","Data":"15db5f2e25dec8b5dfc6a0ab6229e221a64752f966d0b82d01cc10c6fdded9d4"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.528556 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-s9tzw" podStartSLOduration=123.528546392 podStartE2EDuration="2m3.528546392s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.523439825 +0000 UTC m=+143.939461524" watchObservedRunningTime="2026-01-27 21:49:51.528546392 +0000 UTC m=+143.944568091"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.529670 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" event={"ID":"ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9","Type":"ContainerStarted","Data":"a7ad97be3dadc160fd9d1cbac24d334506d2e2f7adcadc2290bf6a4ba702e5b1"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.532748 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" event={"ID":"767d334b-3f70-4847-b45a-ccf0d7e2dc2b","Type":"ContainerStarted","Data":"feb93123fdc07e61b239c351f46ffeaa730a6aba9dab848ab0ad1892932af44d"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.532805 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.532817 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" event={"ID":"767d334b-3f70-4847-b45a-ccf0d7e2dc2b","Type":"ContainerStarted","Data":"4d60088892396b1e48997c932d39df4f203d58f6404c51f0e83eaeb31df8c9d4"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.534401 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body=
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.534450 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.535136 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" event={"ID":"d1a768c9-8a8e-412a-a377-6812b5aca206","Type":"ContainerStarted","Data":"549812ee18e00ccd84610ff59d713d5382c9752b862498d10d0e82ad770505bd"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.544271 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" podStartSLOduration=123.544251082 podStartE2EDuration="2m3.544251082s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.542392962 +0000 UTC m=+143.958414671" watchObservedRunningTime="2026-01-27 21:49:51.544251082 +0000 UTC m=+143.960272781"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.560385 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podStartSLOduration=123.560365571 podStartE2EDuration="2m3.560365571s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.557182977 +0000 UTC m=+143.973204676" watchObservedRunningTime="2026-01-27 21:49:51.560365571 +0000 UTC m=+143.976387270"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.569625 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" event={"ID":"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0","Type":"ContainerStarted","Data":"418b702a475d92a9844e05f95416fd1d9d44549b14290eaac5c39d96664264df"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.574362 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" podStartSLOduration=123.574339685 podStartE2EDuration="2m3.574339685s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.570478832 +0000 UTC m=+143.986500531" watchObservedRunningTime="2026-01-27 21:49:51.574339685 +0000 UTC m=+143.990361384"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.580059 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-npwr7" event={"ID":"827ee45d-1ade-46af-95fe-ab0e673f6dc1","Type":"ContainerStarted","Data":"7fe917f57220cfe5ba36c4039cd38121acc78535ce22e328255667ba01d462a6"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.583488 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" event={"ID":"ae7575d2-5f8d-44a1-90fb-653fe276f273","Type":"ContainerStarted","Data":"d5fb3cf2d0eda01c567f74386ba5b9dae6c0a4ed9ab5791928b4579d6aa24210"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.583537 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" event={"ID":"ae7575d2-5f8d-44a1-90fb-653fe276f273","Type":"ContainerStarted","Data":"778fdbcef138ae30c91788a5bf1da6c07547184979df6f7b0cc5612eb7d70e57"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.588537 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9" event={"ID":"8f440e60-e9e3-43ef-93ca-9b27adeac069","Type":"ContainerStarted","Data":"37ac19e2fa8ff223c946a8a0509e50be2b22c9f555541382bfd662ef31baf368"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.610490 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:51 crc kubenswrapper[4803]: E0127 21:49:51.615953 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:52.115940216 +0000 UTC m=+144.531961915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.640807 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f64jt" podStartSLOduration=123.64078839 podStartE2EDuration="2m3.64078839s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.6385614 +0000 UTC m=+144.054583099" watchObservedRunningTime="2026-01-27 21:49:51.64078839 +0000 UTC m=+144.056810089"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.669336 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podStartSLOduration=123.669316832 podStartE2EDuration="2m3.669316832s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.667340189 +0000 UTC m=+144.083361898" watchObservedRunningTime="2026-01-27 21:49:51.669316832 +0000 UTC m=+144.085338531"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.670530 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" event={"ID":"70091f5f-e06c-4cf3-8bc8-299f10207363","Type":"ContainerStarted","Data":"c8339b8df1bc0afb36378438618a109239883f21ca96f3143202bfd9bfc32a13"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.675047 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.677026 4803 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7x4wr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body=
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.677078 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.689411 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xhhs6" event={"ID":"04162a6e-b772-45d4-9ec4-894e70fc95a2","Type":"ContainerStarted","Data":"4cb46c69f022d4b97802b0118e50ed09ba8a43742e6bb83b9b0cff34f90c7542"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.706636 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" event={"ID":"066a7a5b-c610-4b2e-a2f6-2c90b997fbc9","Type":"ContainerStarted","Data":"40736bab2c72ff76edd045add9455dab02c79e74c4932869cf9736abeea3fcaf"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.711331 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:51 crc kubenswrapper[4803]: E0127 21:49:51.712790 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:52.212766093 +0000 UTC m=+144.628787842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.712927 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n24nl" podStartSLOduration=123.712801633 podStartE2EDuration="2m3.712801633s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.710830661 +0000 UTC m=+144.126852360" watchObservedRunningTime="2026-01-27 21:49:51.712801633 +0000 UTC m=+144.128823332"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.714588 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" event={"ID":"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6","Type":"ContainerStarted","Data":"6f5f6fac5c801bc3a3a53cce68a6e7540e4368954867442ecf96df6c74334241"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.714721 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" event={"ID":"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6","Type":"ContainerStarted","Data":"ad79a935a9513a4d596eace20f298df6dc6e818e6ada6f7b44defead3c1c7ab9"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.719575 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.737293 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.737362 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.738880 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" event={"ID":"5380cb77-bf7a-4cc1-b12b-7159748430eb","Type":"ContainerStarted","Data":"096a32f91ad39d609369762cd469fe4c1a813d1e7bcccbb76db2524cc1fcf707"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.765634 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" event={"ID":"6e8228ba-8397-4400-b30f-07dcf24d6fb5","Type":"ContainerStarted","Data":"4ea3024473ebf837bfa276f58a52d56561b96cdd8b718d8601b802a8609a2795"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.767173 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-74666"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.769273 4803 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-74666 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.769312 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" podUID="6e8228ba-8397-4400-b30f-07dcf24d6fb5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.777585 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" event={"ID":"e62d282c-a35b-42d6-a490-e11c0239b6c3","Type":"ContainerStarted","Data":"404d44888d4c016444e235b390b215be1207a8b93644f11ed339d4211165d348"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.777628 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" event={"ID":"e62d282c-a35b-42d6-a490-e11c0239b6c3","Type":"ContainerStarted","Data":"4e9404d7c65531ec9ef26a81591457538ac1ad576196d9d9dbad57864984900e"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.778750 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" podStartSLOduration=123.778731484 podStartE2EDuration="2m3.778731484s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.776045692 +0000 UTC m=+144.192067391" watchObservedRunningTime="2026-01-27 21:49:51.778731484 +0000 UTC m=+144.194753193"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.781637 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f5476" event={"ID":"3643de61-fe1e-4b5f-acef-ac477aa81f8a","Type":"ContainerStarted","Data":"0b0b5ca5c3ebe5aa17b74ea2f269d255f238865780417d828ec4b3e99c545fbe"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.785787 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" event={"ID":"25eb3de0-78b3-4e89-a860-9f1778060c50","Type":"ContainerStarted","Data":"1303a78cc25ff9884888c07a307fa3edc4534caaf0fcf4bbd8a3d085ce6075e6"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.790736 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" event={"ID":"f15cefaf-aacf-45a8-a2d5-8874dcf893b1","Type":"ContainerStarted","Data":"4c89c43c810231ee925288be224f71173a7b4a5a69b66146643dad2e83f12f72"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.808249 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" event={"ID":"8d7bade4-c73a-419d-9c33-c30b0b7260ca","Type":"ContainerStarted","Data":"1c4539ae91fb6961401a3c6e6e3ba7fe882be11df43c91db8002d152f9be3419"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.814255 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:51 crc kubenswrapper[4803]: E0127 21:49:51.823577 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:52.323561261 +0000 UTC m=+144.739582960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.824534 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k88zf" podStartSLOduration=123.824517537 podStartE2EDuration="2m3.824517537s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.822226666 +0000 UTC m=+144.238248355" watchObservedRunningTime="2026-01-27 21:49:51.824517537 +0000 UTC m=+144.240539236"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.825127 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xhhs6" podStartSLOduration=5.825086872 podStartE2EDuration="5.825086872s" podCreationTimestamp="2026-01-27 21:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.790365095 +0000 UTC m=+144.206386794" watchObservedRunningTime="2026-01-27 21:49:51.825086872 +0000 UTC m=+144.241108571"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.826971 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r" event={"ID":"511157a0-ff3f-4105-b425-81fe57ec64e0","Type":"ContainerStarted","Data":"5d7dbf9fbb0845a5ce390cd8d9deec6987ccf82ccdb3b4b8733139ec32677e58"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.847886 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qrccx" event={"ID":"a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7","Type":"ContainerStarted","Data":"d0718f6f25e861fa4d933c9330f6f1be0794b9a2e4fbdf00a3030ec8f946d223"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.848369 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qrccx" event={"ID":"a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7","Type":"ContainerStarted","Data":"cf499b924a6de62c294c577af42d0ce7a8ac6d7c8aeeb196f62234b474c403f4"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.881934 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podStartSLOduration=123.88191207 podStartE2EDuration="2m3.88191207s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.848686692 +0000 UTC m=+144.264708391" watchObservedRunningTime="2026-01-27 21:49:51.88191207 +0000 UTC m=+144.297933769"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.885020 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh" event={"ID":"8e01cae6-c0f6-4f51-ba69-6a162470b81c","Type":"ContainerStarted","Data":"eb67c3df5215fabf0e633e3244ae233ad97855490ac65e42854b434e14de2c22"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.885087 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh" event={"ID":"8e01cae6-c0f6-4f51-ba69-6a162470b81c","Type":"ContainerStarted","Data":"4628da3346168f0d319a70b19298cf26bd86b40b672c83072ba415f7521255bc"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.892983 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" podStartSLOduration=123.892959765 podStartE2EDuration="2m3.892959765s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.875008986 +0000 UTC m=+144.291030705" watchObservedRunningTime="2026-01-27 21:49:51.892959765 +0000 UTC m=+144.308981474"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.919571 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pslb5" podStartSLOduration=123.919531545 podStartE2EDuration="2m3.919531545s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.912648951 +0000 UTC m=+144.328670660" watchObservedRunningTime="2026-01-27 21:49:51.919531545 +0000 UTC m=+144.335553244"
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.928670 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:51 crc kubenswrapper[4803]: E0127 21:49:51.930766 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:52.430741755 +0000 UTC m=+144.846763454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.966091 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" event={"ID":"e2308949-6865-4d3b-ad3b-1de5c42149b8","Type":"ContainerStarted","Data":"c0759982548a4d1ed5da19a4e9e0d8c6b9f8a7ab84f3935778aee5ac8e6e5aee"}
Jan 27 21:49:51 crc kubenswrapper[4803]: I0127 21:49:51.966125 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" event={"ID":"e2308949-6865-4d3b-ad3b-1de5c42149b8","Type":"ContainerStarted","Data":"56e71f451f4cd19106babe6327133b3bdf6a57bad4577b028ef85a8d8f232d08"}
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:51.996409 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh" podStartSLOduration=123.996391258 podStartE2EDuration="2m3.996391258s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.945030776 +0000 UTC m=+144.361052485" watchObservedRunningTime="2026-01-27 21:49:51.996391258 +0000 UTC m=+144.412412957"
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.000104 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" event={"ID":"f0612fd3-e6b4-43b1-8e66-d0bf17281248","Type":"ContainerStarted","Data":"286cb5d10b222ddcf220923c8c9180b81a65b18f2df8be1c7ece6cdc92a2b043"}
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.015580 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" event={"ID":"1b1a88b1-f5d6-4946-8dda-3defb18a63fd","Type":"ContainerStarted","Data":"4bab17bf9d621c971934faf6e6b5a78dc9e12f1fe724935322da0b2597721867"}
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.030806 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.032020 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:52.532003079 +0000 UTC m=+144.948024778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.045555 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" event={"ID":"841660c5-b152-467b-97d4-38b9a181d315","Type":"ContainerStarted","Data":"6e04f9f18511ba11379e75c7347b330dc13a9a2769291a3002d012fb5d1f140a"}
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.046824 4803 patch_prober.go:28] interesting pod/downloads-7954f5f757-9drvm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.046898 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9drvm" podUID="1bc7c7ba-cad8-4f64-836e-a564b254e1fd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.067520 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg"
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.069283 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-8lpmj" podStartSLOduration=124.069264054 podStartE2EDuration="2m4.069264054s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:51.992891284 +0000 UTC m=+144.408913013" watchObservedRunningTime="2026-01-27 21:49:52.069264054 +0000 UTC m=+144.485285753"
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.069378 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bgfw4" podStartSLOduration=124.069374417 podStartE2EDuration="2m4.069374417s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:52.06798678 +0000 UTC m=+144.484008479" watchObservedRunningTime="2026-01-27 21:49:52.069374417 +0000 UTC m=+144.485396106"
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.136360 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.136759 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:52.636736746 +0000 UTC m=+145.052758445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.237910 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.239907 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:52.739890741 +0000 UTC m=+145.155912540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.275772 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 21:49:52 crc kubenswrapper[4803]: [-]has-synced failed: reason withheld
Jan 27 21:49:52 crc kubenswrapper[4803]: [+]process-running ok
Jan 27 21:49:52 crc kubenswrapper[4803]: healthz check failed
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.276233 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.341427 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.341587 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:52.841568027 +0000 UTC m=+145.257589726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.341699 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.341947 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:52.841937866 +0000 UTC m=+145.257959565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.442920 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.443314 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:52.943298245 +0000 UTC m=+145.359319944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.544694 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.544993 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.04498132 +0000 UTC m=+145.461003019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.646154 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.646377 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.146344047 +0000 UTC m=+145.562365746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.646647 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.646985 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.146976614 +0000 UTC m=+145.562998303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.747270 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq"
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.747458 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq"
Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.747699 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.747915 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.247824487 +0000 UTC m=+145.663846186 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.748157 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.748521 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.248506206 +0000 UTC m=+145.664527905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.849019 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.849233 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.349206165 +0000 UTC m=+145.765227854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.849564 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.849904 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.349895754 +0000 UTC m=+145.765917443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:52 crc kubenswrapper[4803]: I0127 21:49:52.949793 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:52 crc kubenswrapper[4803]: E0127 21:49:52.950610 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.450595304 +0000 UTC m=+145.866617003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.051670 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:53 crc kubenswrapper[4803]: E0127 21:49:53.052095 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.552065224 +0000 UTC m=+145.968086923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.064566 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" event={"ID":"8d7bade4-c73a-419d-9c33-c30b0b7260ca","Type":"ContainerStarted","Data":"2e14f602614e7f10fbe7d58c20e36806b40cbbc12c5df46f3588d3127188d781"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.064631 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" event={"ID":"8d7bade4-c73a-419d-9c33-c30b0b7260ca","Type":"ContainerStarted","Data":"a7dbbb4b8d6fe10b78992bf98a103f679212f243d13168a2662ee41d575f11de"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.066534 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r" event={"ID":"511157a0-ff3f-4105-b425-81fe57ec64e0","Type":"ContainerStarted","Data":"fcad683a313e8db15a5253f5decbc3368fa68228a235998c2a372afa1576db0b"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.066601 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r" event={"ID":"511157a0-ff3f-4105-b425-81fe57ec64e0","Type":"ContainerStarted","Data":"c182e20076ce415bdbd31ea2a52f240109e87130c79e703f527720bfb0391292"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.070631 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" event={"ID":"6e8228ba-8397-4400-b30f-07dcf24d6fb5","Type":"ContainerStarted","Data":"7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.071345 4803 patch_prober.go:28] interesting 
pod/controller-manager-879f6c89f-74666 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.071398 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" podUID="6e8228ba-8397-4400-b30f-07dcf24d6fb5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.072155 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" event={"ID":"f0612fd3-e6b4-43b1-8e66-d0bf17281248","Type":"ContainerStarted","Data":"deabac102e5fbd19c931f4d2e356147c5ab89d20c7cafb0126b2820d3a31a580"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.075636 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" event={"ID":"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0","Type":"ContainerStarted","Data":"69e7c83be0df564cb9724449030dd860fee239fa3e3d4f482149da324626e2cc"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.075925 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.078524 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-npwr7" event={"ID":"827ee45d-1ade-46af-95fe-ab0e673f6dc1","Type":"ContainerStarted","Data":"dfdd8ff0eb6d35e7f2b965350e90a777fea84a30ab0b48949c27d9775db0c7bb"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.078546 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-n7mdf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.078611 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" podUID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.078568 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-npwr7" event={"ID":"827ee45d-1ade-46af-95fe-ab0e673f6dc1","Type":"ContainerStarted","Data":"fc2f0545c3db2bbf50211f040ee956a326ed70e6f386e1b759c860988eda672b"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.078707 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-npwr7" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.081421 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" event={"ID":"1b1a88b1-f5d6-4946-8dda-3defb18a63fd","Type":"ContainerStarted","Data":"a982d39d209bcba50cfc3f37316a9989962d6a6d07df8be8c609ade9667ccb41"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.084546 4803 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f5476" event={"ID":"3643de61-fe1e-4b5f-acef-ac477aa81f8a","Type":"ContainerStarted","Data":"7dfb1d043362ef6e727e5eda0c64e220f9dba7135f8025afd9dd4186405766c4"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.094122 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" event={"ID":"25eb3de0-78b3-4e89-a860-9f1778060c50","Type":"ContainerStarted","Data":"a91190553c095a5a655cedcad893b5277d0342e2e628b51828b0ad56d7f737bc"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.094287 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.095717 4803 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.095803 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.097948 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" event={"ID":"dc3f105d-fa65-4c69-b14e-aac96d07c7e9","Type":"ContainerStarted","Data":"ad9bedb4b2a967814717b0c63a94ab9d31b10a8d5f4dc8ee85afc7d8a08d5a01"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.103117 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" event={"ID":"066a7a5b-c610-4b2e-a2f6-2c90b997fbc9","Type":"ContainerStarted","Data":"85412d9cb1639c4c126842c202f8c14baf7700b77dcb478755f7187e98f0fec0"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.103179 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" event={"ID":"066a7a5b-c610-4b2e-a2f6-2c90b997fbc9","Type":"ContainerStarted","Data":"69ed9b2abeed84ef21761f369e284b75ac00dde9e094810cc3b58b80fbe17d68"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.105359 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2vbkh" event={"ID":"8e01cae6-c0f6-4f51-ba69-6a162470b81c","Type":"ContainerStarted","Data":"01929eb93c15fb26553d23d58719003acd2a86ed189c9fd379fea54c7184df94"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.107266 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" event={"ID":"5380cb77-bf7a-4cc1-b12b-7159748430eb","Type":"ContainerStarted","Data":"eabd45c380b24dc3c4836554ea4d942dbee3e5976ab5a97b5c16d7fdafefcb9f"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.107317 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" 
event={"ID":"5380cb77-bf7a-4cc1-b12b-7159748430eb","Type":"ContainerStarted","Data":"cebd01debd2482feaea8a556bb47bfee804ed2aa22e9e4240de7639fc4849694"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.108814 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" event={"ID":"d1a768c9-8a8e-412a-a377-6812b5aca206","Type":"ContainerStarted","Data":"165938c2becefc4822905457d00444bed96a34281fe3397ecb68a8148d21d902"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.110239 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" podStartSLOduration=125.110206017 podStartE2EDuration="2m5.110206017s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.110196177 +0000 UTC m=+145.526217876" watchObservedRunningTime="2026-01-27 21:49:53.110206017 +0000 UTC m=+145.526227716" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.111686 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlnvs" podStartSLOduration=125.111676406 podStartE2EDuration="2m5.111676406s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.080699818 +0000 UTC m=+145.496721527" watchObservedRunningTime="2026-01-27 21:49:53.111676406 +0000 UTC m=+145.527698105" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.115291 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qrccx" event={"ID":"a2e1adea-aee2-4ac6-b17a-6b8d6efa37a7","Type":"ContainerStarted","Data":"3548973a165007e1f4b1a6ef548989ecbab571e5ab2965419d928d23405605de"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.118576 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jh44p" event={"ID":"d2eb7aad-8e72-489c-a000-ef21c4d9589a","Type":"ContainerStarted","Data":"6f1ae4d3fd65d03908bf30aa39bd16b6c9e02ec672687a10e70576bdb95f97b6"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.122435 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9" event={"ID":"8f440e60-e9e3-43ef-93ca-9b27adeac069","Type":"ContainerStarted","Data":"6a52df3818437940e908decfa432284a7801b5df969052d950ff1f4dd939d04b"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.126554 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" event={"ID":"7a6eb50d-a8af-4e53-a129-aee15ae61037","Type":"ContainerStarted","Data":"15e33a3fa0a2aab3fd1bc4f925debc5ad979d9ee1018783b248ec4faecb70e9a"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.135171 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" event={"ID":"ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9","Type":"ContainerStarted","Data":"990059e329695155ccc7ee8c252f9851bb40f482108c08e5c41d86dfd124c808"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.135224 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.135238 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" event={"ID":"ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9","Type":"ContainerStarted","Data":"58a15a27300f06f740e47c68a1e500c1a005d4df46096eac4557735e34ae3de6"} Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.151962 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.152071 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.153913 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:53 crc kubenswrapper[4803]: E0127 21:49:53.167417 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.666827349 +0000 UTC m=+146.082849048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.189909 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-npwr7" podStartSLOduration=7.189890605 podStartE2EDuration="7.189890605s" podCreationTimestamp="2026-01-27 21:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.188364514 +0000 UTC m=+145.604386213" watchObservedRunningTime="2026-01-27 21:49:53.189890605 +0000 UTC m=+145.605912304" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.190808 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w264r" podStartSLOduration=125.190803229 podStartE2EDuration="2m5.190803229s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.133820067 +0000 UTC m=+145.549841786" watchObservedRunningTime="2026-01-27 21:49:53.190803229 +0000 UTC m=+145.606824928" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.199211 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.228447 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khmz4" podStartSLOduration=125.228431434 podStartE2EDuration="2m5.228431434s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.226894963 +0000 UTC m=+145.642916662" watchObservedRunningTime="2026-01-27 21:49:53.228431434 +0000 UTC m=+145.644453133" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.257789 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podStartSLOduration=125.257772398 podStartE2EDuration="2m5.257772398s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.255516408 +0000 UTC m=+145.671538117" watchObservedRunningTime="2026-01-27 21:49:53.257772398 +0000 UTC m=+145.673794097" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.265131 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:53 crc kubenswrapper[4803]: E0127 21:49:53.271128 4803 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.771113304 +0000 UTC m=+146.187135003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.291080 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 21:49:53 crc kubenswrapper[4803]: [-]has-synced failed: reason withheld Jan 27 21:49:53 crc kubenswrapper[4803]: [+]process-running ok Jan 27 21:49:53 crc kubenswrapper[4803]: healthz check failed Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.291138 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.294405 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7lfg2" podStartSLOduration=125.294392846 podStartE2EDuration="2m5.294392846s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.291693543 +0000 UTC m=+145.707715232" watchObservedRunningTime="2026-01-27 21:49:53.294392846 +0000 UTC m=+145.710414545" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.371956 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:53 crc kubenswrapper[4803]: E0127 21:49:53.373103 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:53.873090437 +0000 UTC m=+146.289112136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.396302 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" podStartSLOduration=125.396284088 podStartE2EDuration="2m5.396284088s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.371371332 +0000 UTC m=+145.787393031" watchObservedRunningTime="2026-01-27 21:49:53.396284088 +0000 UTC m=+145.812305787" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.416827 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wpzf9" podStartSLOduration=125.416808216 podStartE2EDuration="2m5.416808216s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.414679458 +0000 UTC m=+145.830701167" watchObservedRunningTime="2026-01-27 21:49:53.416808216 +0000 UTC m=+145.832829915" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.418169 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qrccx" podStartSLOduration=125.418162072 podStartE2EDuration="2m5.418162072s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.396694498 +0000 UTC m=+145.812716197" watchObservedRunningTime="2026-01-27 21:49:53.418162072 +0000 UTC m=+145.834183781" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.441997 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-f5476" podStartSLOduration=7.441980048 podStartE2EDuration="7.441980048s" podCreationTimestamp="2026-01-27 21:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.440230561 +0000 UTC m=+145.856252260" watchObservedRunningTime="2026-01-27 21:49:53.441980048 +0000 UTC m=+145.858001747" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.474198 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:53 crc kubenswrapper[4803]: E0127 21:49:53.474635 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 21:49:53.97462166 +0000 UTC m=+146.390643359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.478372 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" podStartSLOduration=125.478350409 podStartE2EDuration="2m5.478350409s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.473821458 +0000 UTC m=+145.889843177" watchObservedRunningTime="2026-01-27 21:49:53.478350409 +0000 UTC m=+145.894372108" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.495829 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-szkgj" podStartSLOduration=125.495803265 podStartE2EDuration="2m5.495803265s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.495329643 +0000 UTC m=+145.911351352" watchObservedRunningTime="2026-01-27 21:49:53.495803265 +0000 UTC m=+145.911824964" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.574715 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dqrt7" podStartSLOduration=125.574692802 podStartE2EDuration="2m5.574692802s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.543089958 +0000 UTC m=+145.959111657" watchObservedRunningTime="2026-01-27 21:49:53.574692802 +0000 UTC m=+145.990714501" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.575278 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.575482 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" podStartSLOduration=125.575477024 podStartE2EDuration="2m5.575477024s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.573553982 +0000 UTC m=+145.989575701" watchObservedRunningTime="2026-01-27 21:49:53.575477024 +0000 UTC m=+145.991498723" Jan 27 21:49:53 crc kubenswrapper[4803]: E0127 21:49:53.575738 4803 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.07571878 +0000 UTC m=+146.491740479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.611527 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-drp7p" podStartSLOduration=125.611508426 podStartE2EDuration="2m5.611508426s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:53.609012969 +0000 UTC m=+146.025034678" watchObservedRunningTime="2026-01-27 21:49:53.611508426 +0000 UTC m=+146.027530125" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.631231 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.676813 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:53 crc kubenswrapper[4803]: E0127 21:49:53.677235 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.177219901 +0000 UTC m=+146.593241600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.688943 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.689696 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.695966 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.696178 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.710148 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.778719 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.778963 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45af4a53-d99f-4090-8920-76f0d599708c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45af4a53-d99f-4090-8920-76f0d599708c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.779089 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45af4a53-d99f-4090-8920-76f0d599708c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45af4a53-d99f-4090-8920-76f0d599708c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 21:49:53 crc kubenswrapper[4803]: E0127 21:49:53.779239 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.279221455 +0000 UTC m=+146.695243154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.855950 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.856385 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.868966 4803 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-th8dv container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.33:8443/livez\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.869016 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" podUID="7a6eb50d-a8af-4e53-a129-aee15ae61037" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.33:8443/livez\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.881440 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.881509 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45af4a53-d99f-4090-8920-76f0d599708c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45af4a53-d99f-4090-8920-76f0d599708c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.881555 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45af4a53-d99f-4090-8920-76f0d599708c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45af4a53-d99f-4090-8920-76f0d599708c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 21:49:53 crc kubenswrapper[4803]: E0127 21:49:53.882291 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.382281038 +0000 UTC m=+146.798302737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.882328 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45af4a53-d99f-4090-8920-76f0d599708c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45af4a53-d99f-4090-8920-76f0d599708c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.925762 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45af4a53-d99f-4090-8920-76f0d599708c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45af4a53-d99f-4090-8920-76f0d599708c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.982486 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 21:49:53 crc kubenswrapper[4803]: E0127 21:49:53.982669 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.482643988 +0000 UTC m=+146.898665687 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 21:49:53 crc kubenswrapper[4803]: I0127 21:49:53.982797 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:49:53 crc kubenswrapper[4803]: E0127 21:49:53.983105 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.48309354 +0000 UTC m=+146.899115239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.019976 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.084208 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:54 crc kubenswrapper[4803]: E0127 21:49:54.084670 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.584653852 +0000 UTC m=+147.000675551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.113824 4803 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7p5kq container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]log ok
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]etcd ok
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]poststarthook/max-in-flight-filter ok
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 27 21:49:54 crc kubenswrapper[4803]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 27 21:49:54 crc kubenswrapper[4803]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 27 21:49:54 crc kubenswrapper[4803]: [-]poststarthook/openshift.io-startinformers failed: reason withheld
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 27 21:49:54 crc kubenswrapper[4803]: livez check failed
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.113898 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" podUID="02c1fd2d-3326-44dc-9353-1c19a701826c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.135612 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.135668 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.141284 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-n7mdf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.141327 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" podUID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.162508 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g"
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.163398 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-74666"
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.186135 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:54 crc kubenswrapper[4803]: E0127 21:49:54.186449 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.686437772 +0000 UTC m=+147.102459471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.279622 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 21:49:54 crc kubenswrapper[4803]: [-]has-synced failed: reason withheld
Jan 27 21:49:54 crc kubenswrapper[4803]: [+]process-running ok
Jan 27 21:49:54 crc kubenswrapper[4803]: healthz check failed
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.279818 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.289352 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:54 crc kubenswrapper[4803]: E0127 21:49:54.294972 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.79495341 +0000 UTC m=+147.210975109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.391906 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:54 crc kubenswrapper[4803]: E0127 21:49:54.410940 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.910924597 +0000 UTC m=+147.326946296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.461477 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 27 21:49:54 crc kubenswrapper[4803]: W0127 21:49:54.493562 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod45af4a53_d99f_4090_8920_76f0d599708c.slice/crio-42127d87387efad8ff6bb3f2c752308a529ba82257a9aeb1bfcf07b451066fa1 WatchSource:0}: Error finding container 42127d87387efad8ff6bb3f2c752308a529ba82257a9aeb1bfcf07b451066fa1: Status 404 returned error can't find the container with id 42127d87387efad8ff6bb3f2c752308a529ba82257a9aeb1bfcf07b451066fa1
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.495047 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:54 crc kubenswrapper[4803]: E0127 21:49:54.495525 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.995498056 +0000 UTC m=+147.411519765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.495650 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:54 crc kubenswrapper[4803]: E0127 21:49:54.496070 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:54.996060331 +0000 UTC m=+147.412082030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.596115 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:54 crc kubenswrapper[4803]: E0127 21:49:54.596552 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.096475253 +0000 UTC m=+147.512496952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.698673 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:54 crc kubenswrapper[4803]: E0127 21:49:54.699106 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.199089614 +0000 UTC m=+147.615111313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.799931 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:54 crc kubenswrapper[4803]: E0127 21:49:54.800643 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.300629066 +0000 UTC m=+147.716650765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.902459 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:54 crc kubenswrapper[4803]: E0127 21:49:54.902892 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.402874177 +0000 UTC m=+147.818895876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.953194 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m24l6"]
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.954450 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.969609 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 27 21:49:54 crc kubenswrapper[4803]: I0127 21:49:54.973109 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m24l6"]
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.004422 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.004594 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-catalog-content\") pod \"certified-operators-m24l6\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:49:55 crc kubenswrapper[4803]: E0127 21:49:55.004622 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.504605434 +0000 UTC m=+147.920627133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.004654 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wt7\" (UniqueName: \"kubernetes.io/projected/6a2e67f5-2414-4850-a255-53737799d98b-kube-api-access-c4wt7\") pod \"certified-operators-m24l6\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.004676 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-utilities\") pod \"certified-operators-m24l6\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.015550 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.101875 4803 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.105549 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4wt7\" (UniqueName: \"kubernetes.io/projected/6a2e67f5-2414-4850-a255-53737799d98b-kube-api-access-c4wt7\") pod \"certified-operators-m24l6\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.105579 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-utilities\") pod \"certified-operators-m24l6\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.105601 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.105640 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.105658 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.105693 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.105713 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-catalog-content\") pod \"certified-operators-m24l6\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.106078 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-catalog-content\") pod \"certified-operators-m24l6\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.106935 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:55 crc kubenswrapper[4803]: E0127 21:49:55.112927 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.612908157 +0000 UTC m=+148.028929936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.113583 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-utilities\") pod \"certified-operators-m24l6\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.113964 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.119456 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.157249 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pmd2q"]
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.158208 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.165555 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4wt7\" (UniqueName: \"kubernetes.io/projected/6a2e67f5-2414-4850-a255-53737799d98b-kube-api-access-c4wt7\") pod \"certified-operators-m24l6\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.166195 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.179225 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"45af4a53-d99f-4090-8920-76f0d599708c","Type":"ContainerStarted","Data":"ecb5e4b15e1be45827aa3850481decd3dffcfa5c74b0cef6a2a1ffad88012611"}
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.179444 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"45af4a53-d99f-4090-8920-76f0d599708c","Type":"ContainerStarted","Data":"42127d87387efad8ff6bb3f2c752308a529ba82257a9aeb1bfcf07b451066fa1"}
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.186184 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pmd2q"]
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.189559 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jh44p" event={"ID":"d2eb7aad-8e72-489c-a000-ef21c4d9589a","Type":"ContainerStarted","Data":"2fc0945a3021836bd959359532405caf3d088c82e3b9e75ba878ecc90b82d0f3"}
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.189596 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jh44p" event={"ID":"d2eb7aad-8e72-489c-a000-ef21c4d9589a","Type":"ContainerStarted","Data":"8d459dff2d8551fb15e919a6ac5d9970b5dea02dd42dc2001797288a10598d7d"}
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.206398 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.206749 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.218442 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:55 crc kubenswrapper[4803]: E0127 21:49:55.218692 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.718676761 +0000 UTC m=+148.134698460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.228445 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.244262 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.258142 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.277836 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 21:49:55 crc kubenswrapper[4803]: [-]has-synced failed: reason withheld
Jan 27 21:49:55 crc kubenswrapper[4803]: [+]process-running ok
Jan 27 21:49:55 crc kubenswrapper[4803]: healthz check failed
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.277940 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.288902 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.308440 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.308595 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-utilities\") pod \"community-operators-pmd2q\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.308640 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vsrk\" (UniqueName: \"kubernetes.io/projected/f63e0833-14f7-4d43-805c-a5a05c2fdf02-kube-api-access-8vsrk\") pod \"community-operators-pmd2q\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.308666 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-catalog-content\") pod \"community-operators-pmd2q\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:49:55 crc kubenswrapper[4803]: E0127 21:49:55.309850 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.809832166 +0000 UTC m=+148.225853865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.345437 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.345421097 podStartE2EDuration="2.345421097s" podCreationTimestamp="2026-01-27 21:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:55.241031318 +0000 UTC m=+147.657053017" watchObservedRunningTime="2026-01-27 21:49:55.345421097 +0000 UTC m=+147.761442796"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.348479 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fjdxb"]
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.349395 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.361536 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fjdxb"]
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.410734 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.411424 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-utilities\") pod \"community-operators-pmd2q\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.411520 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vsrk\" (UniqueName: \"kubernetes.io/projected/f63e0833-14f7-4d43-805c-a5a05c2fdf02-kube-api-access-8vsrk\") pod \"community-operators-pmd2q\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.411557 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-catalog-content\") pod \"community-operators-pmd2q\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.412090 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-catalog-content\") pod \"community-operators-pmd2q\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.412333 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-utilities\") pod \"community-operators-pmd2q\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:49:55 crc kubenswrapper[4803]: E0127 21:49:55.412904 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 21:49:55.912889529 +0000 UTC m=+148.328911228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.439653 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vsrk\" (UniqueName: \"kubernetes.io/projected/f63e0833-14f7-4d43-805c-a5a05c2fdf02-kube-api-access-8vsrk\") pod \"community-operators-pmd2q\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.493930 4803 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T21:49:55.101902283Z","Handler":null,"Name":""}
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.509408 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.512876 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.512914 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-utilities\") pod \"certified-operators-fjdxb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") " pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.512939 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bvwz\" (UniqueName: \"kubernetes.io/projected/0d40bdc5-4adc-4650-934f-265f8614a1bb-kube-api-access-5bvwz\") pod \"certified-operators-fjdxb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") " pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.512960 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-catalog-content\") pod \"certified-operators-fjdxb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") " pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:49:55 crc kubenswrapper[4803]: E0127 21:49:55.513453 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 21:49:56.013435945 +0000 UTC m=+148.429457644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbljw" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.540099 4803 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.540139 4803 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.597948 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjx65"]
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.599005 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.618417 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.618575 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-utilities\") pod \"certified-operators-fjdxb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") " pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.618606 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bvwz\" (UniqueName: \"kubernetes.io/projected/0d40bdc5-4adc-4650-934f-265f8614a1bb-kube-api-access-5bvwz\") pod \"certified-operators-fjdxb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") " pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.618628 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-catalog-content\") pod \"certified-operators-fjdxb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") " pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.619066 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-catalog-content\") pod \"certified-operators-fjdxb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") " pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.620794 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjx65"]
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.621644 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-utilities\") pod \"certified-operators-fjdxb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") " pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.657310 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.691717 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bvwz\" (UniqueName: \"kubernetes.io/projected/0d40bdc5-4adc-4650-934f-265f8614a1bb-kube-api-access-5bvwz\") pod \"certified-operators-fjdxb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") " pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.724624 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-utilities\") pod \"community-operators-gjx65\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") " pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.724700 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hjl7\" (UniqueName: \"kubernetes.io/projected/68252b8f-1a1c-46c9-b037-743fd227e55a-kube-api-access-7hjl7\") pod \"community-operators-gjx65\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") " pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.724733 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.724771 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-catalog-content\") pod \"community-operators-gjx65\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") " pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.774013 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.774069 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.833650 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-utilities\") pod \"community-operators-gjx65\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") " pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.833696 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hjl7\" (UniqueName: \"kubernetes.io/projected/68252b8f-1a1c-46c9-b037-743fd227e55a-kube-api-access-7hjl7\") pod \"community-operators-gjx65\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") " pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.833745 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-catalog-content\") pod \"community-operators-gjx65\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") " pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.834188 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-catalog-content\") pod \"community-operators-gjx65\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") " pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.834420 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-utilities\") pod \"community-operators-gjx65\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") " pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.866621 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hjl7\" (UniqueName: \"kubernetes.io/projected/68252b8f-1a1c-46c9-b037-743fd227e55a-kube-api-access-7hjl7\") pod \"community-operators-gjx65\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") " pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:49:55 crc kubenswrapper[4803]: W0127 21:49:55.936656 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-603be281f127182df660c49bdaf4f4b86ab0df30d875ecd8bf14d578902cbf36 WatchSource:0}: Error finding container 603be281f127182df660c49bdaf4f4b86ab0df30d875ecd8bf14d578902cbf36: Status 404 returned error can't find the container with id 603be281f127182df660c49bdaf4f4b86ab0df30d875ecd8bf14d578902cbf36
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.969378 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:49:55 crc kubenswrapper[4803]: I0127 21:49:55.984110 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.092398 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbljw\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.216125 4803 generic.go:334] "Generic (PLEG): container finished" podID="45af4a53-d99f-4090-8920-76f0d599708c" containerID="ecb5e4b15e1be45827aa3850481decd3dffcfa5c74b0cef6a2a1ffad88012611" exitCode=0
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.216179 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"45af4a53-d99f-4090-8920-76f0d599708c","Type":"ContainerDied","Data":"ecb5e4b15e1be45827aa3850481decd3dffcfa5c74b0cef6a2a1ffad88012611"}
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.225559 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"603be281f127182df660c49bdaf4f4b86ab0df30d875ecd8bf14d578902cbf36"}
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.229164 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jh44p" event={"ID":"d2eb7aad-8e72-489c-a000-ef21c4d9589a","Type":"ContainerStarted","Data":"2c8237e94d5d2f3d55e26ea594cf4c8c316265952010c5cdbed20e4c716ba0b3"}
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.231206 4803 generic.go:334] "Generic (PLEG): container finished" podID="dc3f105d-fa65-4c69-b14e-aac96d07c7e9" containerID="ad9bedb4b2a967814717b0c63a94ab9d31b10a8d5f4dc8ee85afc7d8a08d5a01" exitCode=0
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.231231 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" event={"ID":"dc3f105d-fa65-4c69-b14e-aac96d07c7e9","Type":"ContainerDied","Data":"ad9bedb4b2a967814717b0c63a94ab9d31b10a8d5f4dc8ee85afc7d8a08d5a01"}
Jan 27 21:49:56 crc kubenswrapper[4803]: W0127 21:49:56.261759 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-216b6000677ed305c71d33e162e278c55a296ec731e311bddb0a0f35867d5e89 WatchSource:0}: Error finding container 216b6000677ed305c71d33e162e278c55a296ec731e311bddb0a0f35867d5e89: Status 404 returned error can't find the container with id 216b6000677ed305c71d33e162e278c55a296ec731e311bddb0a0f35867d5e89
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.270567 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 21:49:56 crc kubenswrapper[4803]: [-]has-synced failed: reason withheld
Jan 27 21:49:56 crc kubenswrapper[4803]: [+]process-running ok
Jan 27 21:49:56 crc kubenswrapper[4803]: healthz check failed
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.270627 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.301025 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jh44p" podStartSLOduration=10.30099426 podStartE2EDuration="10.30099426s" podCreationTimestamp="2026-01-27 21:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:56.293389876 +0000 UTC m=+148.709411585" watchObservedRunningTime="2026-01-27 21:49:56.30099426 +0000 UTC m=+148.717015959"
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.326015 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.341309 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m24l6"]
Jan 27 21:49:56 crc kubenswrapper[4803]: W0127 21:49:56.341630 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a2e67f5_2414_4850_a255_53737799d98b.slice/crio-a38f7ac3e855aa183ac4a18f5722a4a49596b852801095c90dd00bf339be1390 WatchSource:0}: Error finding container a38f7ac3e855aa183ac4a18f5722a4a49596b852801095c90dd00bf339be1390: Status 404 returned error can't find the container with id a38f7ac3e855aa183ac4a18f5722a4a49596b852801095c90dd00bf339be1390
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.380819 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.490040 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pmd2q"]
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.510536 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fjdxb"]
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.637950 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjx65"]
Jan 27 21:49:56 crc kubenswrapper[4803]: I0127 21:49:56.817500 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbljw"]
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.150539 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sbg9j"]
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.153115 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.156437 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.158784 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz96c\" (UniqueName: \"kubernetes.io/projected/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-kube-api-access-vz96c\") pod \"redhat-marketplace-sbg9j\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.158935 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-catalog-content\") pod \"redhat-marketplace-sbg9j\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.159052 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-utilities\") pod \"redhat-marketplace-sbg9j\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.161826 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbg9j"]
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.238748 4803 generic.go:334] "Generic (PLEG): container finished" podID="0d40bdc5-4adc-4650-934f-265f8614a1bb" containerID="c64a397b3924cb57667fd396c60f6903352c7f18de2ab1f2eb5066355c9f1479" exitCode=0
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.238822 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjdxb" event={"ID":"0d40bdc5-4adc-4650-934f-265f8614a1bb","Type":"ContainerDied","Data":"c64a397b3924cb57667fd396c60f6903352c7f18de2ab1f2eb5066355c9f1479"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.238874 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjdxb" event={"ID":"0d40bdc5-4adc-4650-934f-265f8614a1bb","Type":"ContainerStarted","Data":"557e205a87dc377c68ec11d3e908667194130ba4dc412c994b1c71b1b4d1ad93"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.240422 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.240579 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e122b592b68af974c434c390cf7200abd92961e5f39efdbda9def4400cb83ea4"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.240626 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"216b6000677ed305c71d33e162e278c55a296ec731e311bddb0a0f35867d5e89"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.240834 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.243055 4803 generic.go:334] "Generic (PLEG): container finished" podID="6a2e67f5-2414-4850-a255-53737799d98b" containerID="28dbcf471c63ece8a022bc99db0b9e1548972c96bba85021871e0ada531febb2" exitCode=0
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.243078 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24l6" event={"ID":"6a2e67f5-2414-4850-a255-53737799d98b","Type":"ContainerDied","Data":"28dbcf471c63ece8a022bc99db0b9e1548972c96bba85021871e0ada531febb2"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.243117 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24l6" event={"ID":"6a2e67f5-2414-4850-a255-53737799d98b","Type":"ContainerStarted","Data":"a38f7ac3e855aa183ac4a18f5722a4a49596b852801095c90dd00bf339be1390"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.245286 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"88dc2a3e3788c68475bc339f83716ca0056dc6e661508c18e7669edee0417f7a"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.247331 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" event={"ID":"a2e0fd9f-4917-4c1c-8b58-f952407e7e68","Type":"ContainerStarted","Data":"8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.247369 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" event={"ID":"a2e0fd9f-4917-4c1c-8b58-f952407e7e68","Type":"ContainerStarted","Data":"4fc7dce46dacff4e5657fc9fc1b685b68b946b78e060991ab304ba7d734dfdd6"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.247492 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.248742 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7d1b8ba91179c889697889e24a1478f06c873b2748443a85933385c91513ffa8"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.248784 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2256d15be5346644ec6e0441d820943d69284f03b681bb87b2940b9497296fc1"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.250939 4803 generic.go:334] "Generic (PLEG): container finished" podID="68252b8f-1a1c-46c9-b037-743fd227e55a" containerID="a4fc6b1a227c2d4ec40cff3c79bb5eb3d9e89a7be977be1fcc926104d24cdb1c" exitCode=0
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.251012 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjx65" event={"ID":"68252b8f-1a1c-46c9-b037-743fd227e55a","Type":"ContainerDied","Data":"a4fc6b1a227c2d4ec40cff3c79bb5eb3d9e89a7be977be1fcc926104d24cdb1c"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.251047 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjx65" event={"ID":"68252b8f-1a1c-46c9-b037-743fd227e55a","Type":"ContainerStarted","Data":"eebf30b43121acf3e36c21db119cb54040b3d0afbcb2c53ad58b3ea8441b836d"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.253979 4803 generic.go:334] "Generic (PLEG): container finished" podID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" containerID="81a842133c5c513fdecf09ec6e939675b4ff2fd0101dbf2db3108f63104dfdd5" exitCode=0
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.254079 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmd2q" event={"ID":"f63e0833-14f7-4d43-805c-a5a05c2fdf02","Type":"ContainerDied","Data":"81a842133c5c513fdecf09ec6e939675b4ff2fd0101dbf2db3108f63104dfdd5"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.254113 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmd2q" event={"ID":"f63e0833-14f7-4d43-805c-a5a05c2fdf02","Type":"ContainerStarted","Data":"a6a3911329c8bcfa5db3e7b2426681c642e4ec4f053ff6f1f2cffe5609cb7fe4"}
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.260771 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz96c\" (UniqueName: \"kubernetes.io/projected/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-kube-api-access-vz96c\") pod \"redhat-marketplace-sbg9j\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.261325 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-catalog-content\") pod \"redhat-marketplace-sbg9j\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.261644 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-utilities\") pod \"redhat-marketplace-sbg9j\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.261837 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-catalog-content\") pod \"redhat-marketplace-sbg9j\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.262090 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-utilities\") pod \"redhat-marketplace-sbg9j\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.278586 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 21:49:57 crc kubenswrapper[4803]: [-]has-synced failed: reason withheld
Jan 27 21:49:57 crc kubenswrapper[4803]: [+]process-running ok
Jan 27 21:49:57 crc kubenswrapper[4803]:
healthz check failed Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.278632 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.295093 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz96c\" (UniqueName: \"kubernetes.io/projected/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-kube-api-access-vz96c\") pod \"redhat-marketplace-sbg9j\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " pod="openshift-marketplace/redhat-marketplace-sbg9j" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.352156 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" podStartSLOduration=129.352141254 podStartE2EDuration="2m9.352141254s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:49:57.351014834 +0000 UTC m=+149.767036553" watchObservedRunningTime="2026-01-27 21:49:57.352141254 +0000 UTC m=+149.768162953" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.470305 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbg9j" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.553645 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cppwp"] Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.554607 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.612127 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cppwp"] Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.692901 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.709676 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.714742 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-utilities\") pod \"redhat-marketplace-cppwp\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.714798 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-catalog-content\") pod \"redhat-marketplace-cppwp\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.714822 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sllc\" (UniqueName: \"kubernetes.io/projected/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-kube-api-access-4sllc\") pod \"redhat-marketplace-cppwp\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.755892 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.760494 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7p5kq" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.817196 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45af4a53-d99f-4090-8920-76f0d599708c-kube-api-access\") pod \"45af4a53-d99f-4090-8920-76f0d599708c\" (UID: \"45af4a53-d99f-4090-8920-76f0d599708c\") " Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.817291 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8c7f\" (UniqueName: \"kubernetes.io/projected/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-kube-api-access-p8c7f\") pod \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.817318 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-secret-volume\") pod \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.817392 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-config-volume\") pod \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\" (UID: \"dc3f105d-fa65-4c69-b14e-aac96d07c7e9\") " Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.817419 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45af4a53-d99f-4090-8920-76f0d599708c-kubelet-dir\") pod \"45af4a53-d99f-4090-8920-76f0d599708c\" (UID: \"45af4a53-d99f-4090-8920-76f0d599708c\") " Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 
21:49:57.817537 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-utilities\") pod \"redhat-marketplace-cppwp\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.817575 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-catalog-content\") pod \"redhat-marketplace-cppwp\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.817595 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sllc\" (UniqueName: \"kubernetes.io/projected/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-kube-api-access-4sllc\") pod \"redhat-marketplace-cppwp\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.823996 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-kube-api-access-p8c7f" (OuterVolumeSpecName: "kube-api-access-p8c7f") pod "dc3f105d-fa65-4c69-b14e-aac96d07c7e9" (UID: "dc3f105d-fa65-4c69-b14e-aac96d07c7e9"). InnerVolumeSpecName "kube-api-access-p8c7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.827030 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45af4a53-d99f-4090-8920-76f0d599708c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "45af4a53-d99f-4090-8920-76f0d599708c" (UID: "45af4a53-d99f-4090-8920-76f0d599708c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.827636 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-config-volume" (OuterVolumeSpecName: "config-volume") pod "dc3f105d-fa65-4c69-b14e-aac96d07c7e9" (UID: "dc3f105d-fa65-4c69-b14e-aac96d07c7e9"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.827849 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-utilities\") pod \"redhat-marketplace-cppwp\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.827974 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-catalog-content\") pod \"redhat-marketplace-cppwp\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.827994 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dc3f105d-fa65-4c69-b14e-aac96d07c7e9" (UID: "dc3f105d-fa65-4c69-b14e-aac96d07c7e9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.831062 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45af4a53-d99f-4090-8920-76f0d599708c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "45af4a53-d99f-4090-8920-76f0d599708c" (UID: "45af4a53-d99f-4090-8920-76f0d599708c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.858796 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sllc\" (UniqueName: \"kubernetes.io/projected/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-kube-api-access-4sllc\") pod \"redhat-marketplace-cppwp\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.918916 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8c7f\" (UniqueName: \"kubernetes.io/projected/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-kube-api-access-p8c7f\") on node \"crc\" DevicePath \"\"" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.918952 4803 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.918962 4803 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc3f105d-fa65-4c69-b14e-aac96d07c7e9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.918971 4803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45af4a53-d99f-4090-8920-76f0d599708c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.918979 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45af4a53-d99f-4090-8920-76f0d599708c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 21:49:57 crc kubenswrapper[4803]: I0127 21:49:57.937194 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.004575 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbg9j"] Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.154650 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wrpjf"] Jan 27 21:49:58 crc kubenswrapper[4803]: E0127 21:49:58.154892 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45af4a53-d99f-4090-8920-76f0d599708c" containerName="pruner" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.154903 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="45af4a53-d99f-4090-8920-76f0d599708c" containerName="pruner" Jan 27 21:49:58 crc kubenswrapper[4803]: E0127 21:49:58.154914 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3f105d-fa65-4c69-b14e-aac96d07c7e9" containerName="collect-profiles" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.154920 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3f105d-fa65-4c69-b14e-aac96d07c7e9" containerName="collect-profiles" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.155003 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="45af4a53-d99f-4090-8920-76f0d599708c" containerName="pruner" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.155017 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3f105d-fa65-4c69-b14e-aac96d07c7e9" containerName="collect-profiles" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.155671 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.159193 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.169542 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wrpjf"] Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.269009 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" event={"ID":"dc3f105d-fa65-4c69-b14e-aac96d07c7e9","Type":"ContainerDied","Data":"15db5f2e25dec8b5dfc6a0ab6229e221a64752f966d0b82d01cc10c6fdded9d4"} Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.269042 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15db5f2e25dec8b5dfc6a0ab6229e221a64752f966d0b82d01cc10c6fdded9d4" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.269097 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.276165 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 21:49:58 crc kubenswrapper[4803]: [-]has-synced failed: reason withheld Jan 27 21:49:58 crc kubenswrapper[4803]: [+]process-running ok Jan 27 21:49:58 crc kubenswrapper[4803]: healthz check failed Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.276645 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.285396 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.285979 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"45af4a53-d99f-4090-8920-76f0d599708c","Type":"ContainerDied","Data":"42127d87387efad8ff6bb3f2c752308a529ba82257a9aeb1bfcf07b451066fa1"} Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.286018 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42127d87387efad8ff6bb3f2c752308a529ba82257a9aeb1bfcf07b451066fa1" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.291039 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbg9j" event={"ID":"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1","Type":"ContainerStarted","Data":"a06d2da1a78457d7ba76299907472d260b396c0f3e253553768e417dab343b70"} Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.291071 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbg9j" event={"ID":"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1","Type":"ContainerStarted","Data":"25518cdcfb367ea8f1ce8a57af4051c97104ad2a4c6d2ed9d607946363208620"} Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.296174 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cppwp"] Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.334317 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-catalog-content\") pod \"redhat-operators-wrpjf\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.334385 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-utilities\") pod \"redhat-operators-wrpjf\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.334492 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t7zb\" (UniqueName: 
\"kubernetes.io/projected/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-kube-api-access-4t7zb\") pod \"redhat-operators-wrpjf\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.435834 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t7zb\" (UniqueName: \"kubernetes.io/projected/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-kube-api-access-4t7zb\") pod \"redhat-operators-wrpjf\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.436083 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-catalog-content\") pod \"redhat-operators-wrpjf\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.436146 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-utilities\") pod \"redhat-operators-wrpjf\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.437423 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-utilities\") pod \"redhat-operators-wrpjf\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.438159 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-catalog-content\") pod \"redhat-operators-wrpjf\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.457733 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t7zb\" (UniqueName: \"kubernetes.io/projected/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-kube-api-access-4t7zb\") pod \"redhat-operators-wrpjf\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.496762 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.568097 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bmmdh"] Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.569432 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.591499 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bmmdh"] Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.746988 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-catalog-content\") pod \"redhat-operators-bmmdh\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.747348 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpflg\" (UniqueName: \"kubernetes.io/projected/5b352572-d580-4fb4-b60a-49db17322472-kube-api-access-wpflg\") pod \"redhat-operators-bmmdh\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.747409 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-utilities\") pod \"redhat-operators-bmmdh\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.799294 4803 patch_prober.go:28] interesting pod/downloads-7954f5f757-9drvm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.799353 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9drvm" podUID="1bc7c7ba-cad8-4f64-836e-a564b254e1fd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.800782 4803 patch_prober.go:28] interesting pod/downloads-7954f5f757-9drvm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.800838 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9drvm" podUID="1bc7c7ba-cad8-4f64-836e-a564b254e1fd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.842992 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wrpjf"] Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.848410 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-catalog-content\") pod \"redhat-operators-bmmdh\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.848464 4803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wpflg\" (UniqueName: \"kubernetes.io/projected/5b352572-d580-4fb4-b60a-49db17322472-kube-api-access-wpflg\") pod \"redhat-operators-bmmdh\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.848492 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-utilities\") pod \"redhat-operators-bmmdh\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.850929 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-utilities\") pod \"redhat-operators-bmmdh\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.850959 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-catalog-content\") pod \"redhat-operators-bmmdh\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:49:58 crc kubenswrapper[4803]: W0127 21:49:58.857579 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod467bcdf9_e419_4ef2_84af_2cfedbfa28f2.slice/crio-19f55507a8a010e76c0973a3217d8f60c0d5d9130d33e31153e5c037bc470047 WatchSource:0}: Error finding container 19f55507a8a010e76c0973a3217d8f60c0d5d9130d33e31153e5c037bc470047: Status 404 returned error can't find the container with id 19f55507a8a010e76c0973a3217d8f60c0d5d9130d33e31153e5c037bc470047 Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.860779 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.866717 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.871122 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpflg\" (UniqueName: \"kubernetes.io/projected/5b352572-d580-4fb4-b60a-49db17322472-kube-api-access-wpflg\") pod \"redhat-operators-bmmdh\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.911457 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.911500 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.931966 4803 patch_prober.go:28] interesting pod/console-f9d7485db-s9tzw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.932019 4803 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-console/console-f9d7485db-s9tzw" podUID="b06a9990-b5a6-4198-b3da-22eb6df6692b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 21:49:58 crc kubenswrapper[4803]: I0127 21:49:58.932188 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.269373 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bmmdh"] Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.269798 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.275629 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.321387 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.328925 4803 generic.go:334] "Generic (PLEG): container finished" podID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerID="af1799e698d918ba85429c21fc20096fabca6be4b6f8a4ab56922b9090744f3d" exitCode=0 Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.329027 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cppwp" event={"ID":"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a","Type":"ContainerDied","Data":"af1799e698d918ba85429c21fc20096fabca6be4b6f8a4ab56922b9090744f3d"} Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.329077 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cppwp" event={"ID":"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a","Type":"ContainerStarted","Data":"566559198e450756c86974e4ec477fa99c349978845be2ad15f94e6a022fb8ff"} Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.349929 4803 generic.go:334] "Generic (PLEG): container finished" podID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" containerID="a06d2da1a78457d7ba76299907472d260b396c0f3e253553768e417dab343b70" exitCode=0 Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.350408 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbg9j" event={"ID":"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1","Type":"ContainerDied","Data":"a06d2da1a78457d7ba76299907472d260b396c0f3e253553768e417dab343b70"} Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.356787 4803 generic.go:334] "Generic (PLEG): container finished" podID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" containerID="b8d73f7507c01f01f374d273374f4296833792271ce126cd7e5e3f9078796ad8" exitCode=0 Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.357895 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrpjf" event={"ID":"467bcdf9-e419-4ef2-84af-2cfedbfa28f2","Type":"ContainerDied","Data":"b8d73f7507c01f01f374d273374f4296833792271ce126cd7e5e3f9078796ad8"} Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.357918 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrpjf" 
event={"ID":"467bcdf9-e419-4ef2-84af-2cfedbfa28f2","Type":"ContainerStarted","Data":"19f55507a8a010e76c0973a3217d8f60c0d5d9130d33e31153e5c037bc470047"} Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.371161 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmmdh" event={"ID":"5b352572-d580-4fb4-b60a-49db17322472","Type":"ContainerStarted","Data":"aba4572ff36784d3a76c91fe7b11445b71caf5afffddeb4c384c33e64aa4778f"} Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.383667 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.550692 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.556982 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.569598 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.569782 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.574238 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.574951 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.574982 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.649267 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.676366 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.676420 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.676504 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.703368 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 21:49:59 crc kubenswrapper[4803]: I0127 21:49:59.916744 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 21:50:00 crc kubenswrapper[4803]: I0127 21:50:00.393655 4803 generic.go:334] "Generic (PLEG): container finished" podID="5b352572-d580-4fb4-b60a-49db17322472" containerID="8f9ad76218981e627c854387ae1791a632ac49655477c48bcbc0361d4aea0ad1" exitCode=0 Jan 27 21:50:00 crc kubenswrapper[4803]: I0127 21:50:00.393760 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmmdh" event={"ID":"5b352572-d580-4fb4-b60a-49db17322472","Type":"ContainerDied","Data":"8f9ad76218981e627c854387ae1791a632ac49655477c48bcbc0361d4aea0ad1"} Jan 27 21:50:00 crc kubenswrapper[4803]: I0127 21:50:00.589023 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 21:50:01 crc kubenswrapper[4803]: I0127 21:50:01.404878 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0","Type":"ContainerStarted","Data":"0f4751cb1a9f98838a1628a654f68473b2696839f97207f248a074551c871b21"} Jan 27 21:50:02 crc kubenswrapper[4803]: I0127 21:50:02.421897 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0","Type":"ContainerStarted","Data":"e87c0acc895dbb4014f7437065938d64f0eb77d8d947a03f8e6508dec7d935ad"} Jan 27 21:50:03 crc kubenswrapper[4803]: I0127 21:50:03.447510 4803 generic.go:334] "Generic (PLEG): container finished" podID="23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0" containerID="e87c0acc895dbb4014f7437065938d64f0eb77d8d947a03f8e6508dec7d935ad" exitCode=0 Jan 27 21:50:03 crc kubenswrapper[4803]: I0127 21:50:03.447570 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0","Type":"ContainerDied","Data":"e87c0acc895dbb4014f7437065938d64f0eb77d8d947a03f8e6508dec7d935ad"} Jan 27 21:50:04 crc kubenswrapper[4803]: I0127 21:50:04.434713 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-npwr7" Jan 27 21:50:08 crc kubenswrapper[4803]: I0127 21:50:08.799249 4803 patch_prober.go:28] interesting pod/downloads-7954f5f757-9drvm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 21:50:08 crc kubenswrapper[4803]: I0127 21:50:08.799980 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9drvm" podUID="1bc7c7ba-cad8-4f64-836e-a564b254e1fd" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 21:50:08 crc kubenswrapper[4803]: I0127 21:50:08.799301 4803 patch_prober.go:28] interesting pod/downloads-7954f5f757-9drvm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 21:50:08 crc kubenswrapper[4803]: I0127 21:50:08.800069 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9drvm" podUID="1bc7c7ba-cad8-4f64-836e-a564b254e1fd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 21:50:08 crc kubenswrapper[4803]: I0127 21:50:08.957120 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:50:08 crc kubenswrapper[4803]: I0127 21:50:08.962687 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-s9tzw" Jan 27 21:50:10 crc kubenswrapper[4803]: I0127 21:50:10.880992 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:50:10 crc kubenswrapper[4803]: I0127 21:50:10.887189 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d757da7-4079-4a7a-806d-560834fe95ae-metrics-certs\") pod \"network-metrics-daemon-72wq6\" (UID: \"0d757da7-4079-4a7a-806d-560834fe95ae\") " pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:50:10 crc kubenswrapper[4803]: I0127 21:50:10.920513 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-72wq6" Jan 27 21:50:12 crc kubenswrapper[4803]: I0127 21:50:12.532874 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 21:50:12 crc kubenswrapper[4803]: I0127 21:50:12.582952 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0","Type":"ContainerDied","Data":"0f4751cb1a9f98838a1628a654f68473b2696839f97207f248a074551c871b21"} Jan 27 21:50:12 crc kubenswrapper[4803]: I0127 21:50:12.583347 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f4751cb1a9f98838a1628a654f68473b2696839f97207f248a074551c871b21" Jan 27 21:50:12 crc kubenswrapper[4803]: I0127 21:50:12.583417 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 21:50:12 crc kubenswrapper[4803]: I0127 21:50:12.606600 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kubelet-dir\") pod \"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0\" (UID: \"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0\") " Jan 27 21:50:12 crc kubenswrapper[4803]: I0127 21:50:12.606658 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kube-api-access\") pod \"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0\" (UID: \"23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0\") " Jan 27 21:50:12 crc kubenswrapper[4803]: I0127 21:50:12.607724 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0" (UID: "23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:50:12 crc kubenswrapper[4803]: I0127 21:50:12.612957 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0" (UID: "23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:50:12 crc kubenswrapper[4803]: I0127 21:50:12.683537 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-72wq6"] Jan 27 21:50:12 crc kubenswrapper[4803]: I0127 21:50:12.707821 4803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 21:50:12 crc kubenswrapper[4803]: I0127 21:50:12.707891 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 21:50:16 crc kubenswrapper[4803]: I0127 21:50:16.343720 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:50:16 crc kubenswrapper[4803]: I0127 21:50:16.344103 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:50:16 crc kubenswrapper[4803]: I0127 21:50:16.386729 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:50:18 crc kubenswrapper[4803]: I0127 21:50:18.815277 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9drvm" Jan 27 21:50:20 crc kubenswrapper[4803]: I0127 
Jan 27 21:50:20 crc kubenswrapper[4803]: I0127 21:50:20.624668 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-72wq6" event={"ID":"0d757da7-4079-4a7a-806d-560834fe95ae","Type":"ContainerStarted","Data":"f6b7955fc7c6267fab4515785e9a9b967703fae65b6d5bd9b938470551eab85d"}
Jan 27 21:50:29 crc kubenswrapper[4803]: I0127 21:50:29.373163 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn"
Jan 27 21:50:30 crc kubenswrapper[4803]: E0127 21:50:30.803086 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 27 21:50:30 crc kubenswrapper[4803]: E0127 21:50:30.803601 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vsrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pmd2q_openshift-marketplace(f63e0833-14f7-4d43-805c-a5a05c2fdf02): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 21:50:30 crc kubenswrapper[4803]: E0127 21:50:30.804767 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pmd2q" podUID="f63e0833-14f7-4d43-805c-a5a05c2fdf02"
Jan 27 21:50:33 crc kubenswrapper[4803]: E0127 21:50:33.307202 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 27 21:50:33 crc kubenswrapper[4803]: E0127 21:50:33.307440 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hjl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gjx65_openshift-marketplace(68252b8f-1a1c-46c9-b037-743fd227e55a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 21:50:33 crc kubenswrapper[4803]: E0127 21:50:33.308628 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gjx65" podUID="68252b8f-1a1c-46c9-b037-743fd227e55a"
Jan 27 21:50:33 crc kubenswrapper[4803]: E0127 21:50:33.438789 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pmd2q" podUID="f63e0833-14f7-4d43-805c-a5a05c2fdf02"
Jan 27 21:50:34 crc kubenswrapper[4803]: E0127 21:50:34.348781 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 27 21:50:34 crc kubenswrapper[4803]: E0127 21:50:34.348946 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vz96c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sbg9j_openshift-marketplace(e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 21:50:34 crc kubenswrapper[4803]: E0127 21:50:34.350114 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-sbg9j" podUID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1"
Jan 27 21:50:35 crc kubenswrapper[4803]: I0127 21:50:35.369335 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 21:50:37 crc kubenswrapper[4803]: I0127 21:50:37.937024 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 21:50:37 crc kubenswrapper[4803]: E0127 21:50:37.949882 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0" containerName="pruner"
Jan 27 21:50:37 crc kubenswrapper[4803]: I0127 21:50:37.949960 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0" containerName="pruner"
Jan 27 21:50:37 crc kubenswrapper[4803]: I0127 21:50:37.952428 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c9a0c9-2d59-4175-a06d-7e2d5e4d2ad0" containerName="pruner"
Jan 27 21:50:37 crc kubenswrapper[4803]: I0127 21:50:37.954003 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 21:50:37 crc kubenswrapper[4803]: I0127 21:50:37.956773 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 27 21:50:37 crc kubenswrapper[4803]: I0127 21:50:37.961658 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 27 21:50:37 crc kubenswrapper[4803]: I0127 21:50:37.981169 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 21:50:38 crc kubenswrapper[4803]: I0127 21:50:38.101594 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0181537-cbf9-4620-8709-04a5fc5cf618-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0181537-cbf9-4620-8709-04a5fc5cf618\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 21:50:38 crc kubenswrapper[4803]: I0127 21:50:38.101646 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0181537-cbf9-4620-8709-04a5fc5cf618-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b0181537-cbf9-4620-8709-04a5fc5cf618\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 21:50:38 crc kubenswrapper[4803]: I0127 21:50:38.203289 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0181537-cbf9-4620-8709-04a5fc5cf618-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0181537-cbf9-4620-8709-04a5fc5cf618\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 21:50:38 crc kubenswrapper[4803]: I0127 21:50:38.203712 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0181537-cbf9-4620-8709-04a5fc5cf618-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b0181537-cbf9-4620-8709-04a5fc5cf618\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 21:50:38 crc kubenswrapper[4803]: I0127 21:50:38.203792 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0181537-cbf9-4620-8709-04a5fc5cf618-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b0181537-cbf9-4620-8709-04a5fc5cf618\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 21:50:38 crc kubenswrapper[4803]: I0127 21:50:38.223830 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0181537-cbf9-4620-8709-04a5fc5cf618-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0181537-cbf9-4620-8709-04a5fc5cf618\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 21:50:38 crc kubenswrapper[4803]: I0127 21:50:38.285719 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 21:50:38 crc kubenswrapper[4803]: E0127 21:50:38.533338 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 27 21:50:38 crc kubenswrapper[4803]: E0127 21:50:38.533770 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sllc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cppwp_openshift-marketplace(0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 21:50:38 crc kubenswrapper[4803]: E0127 21:50:38.535746 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cppwp" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a"
Jan 27 21:50:40 crc kubenswrapper[4803]: E0127 21:50:40.999128 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gjx65" podUID="68252b8f-1a1c-46c9-b037-743fd227e55a"
Jan 27 21:50:41 crc kubenswrapper[4803]: E0127 21:50:40.999497 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cppwp" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a"
Jan 27 21:50:41 crc kubenswrapper[4803]: E0127 21:50:40.999497 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sbg9j" podUID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1"
Jan 27 21:50:41 crc kubenswrapper[4803]: E0127 21:50:41.440247 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 27 21:50:41 crc kubenswrapper[4803]: E0127 21:50:41.440819 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t7zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wrpjf_openshift-marketplace(467bcdf9-e419-4ef2-84af-2cfedbfa28f2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 21:50:41 crc kubenswrapper[4803]: E0127 21:50:41.442383 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wrpjf" podUID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2"
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.140191 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.141217 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
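The run of entries above shows the same failure cycle for each marketplace catalog pod: a "PullImage ... context canceled" RPC error, an UnhandledError dump of the extract-content init container spec, an ErrImagePull sync failure, and then ImagePullBackOff while the kubelet retries under backoff. A quick way to see which images are stuck is to tally the backoff entries per image; the sketch below is illustrative only (the file name "journal.log" and the saved-excerpt setup are assumptions, not part of this log) and relies on the exact line format visible here.

```go
// backoff_tally.go — minimal sketch: count "Back-off pulling image" events
// per image in a saved excerpt of this journal. Not cluster tooling.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("journal.log") // hypothetical capture of this journal
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Matches the escaped image reference inside ImagePullBackOff entries,
	// e.g. ...Back-off pulling image \\\"registry.redhat.io/...\\\"...
	re := regexp.MustCompile(`Back-off pulling image \\+"([^"\\]+)\\+"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for img, n := range counts {
		fmt.Printf("%3d  %s\n", n, img)
	}
}
```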
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.152153 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.253131 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kube-api-access\") pod \"installer-9-crc\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.253187 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-var-lock\") pod \"installer-9-crc\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.253340 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.354748 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.354820 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kube-api-access\") pod \"installer-9-crc\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.354886 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-var-lock\") pod \"installer-9-crc\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.354963 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-var-lock\") pod \"installer-9-crc\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.354999 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.390475 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kube-api-access\") pod \"installer-9-crc\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 21:50:42 crc kubenswrapper[4803]: I0127 21:50:42.474904 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 21:50:42 crc kubenswrapper[4803]: E0127 21:50:42.756977 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wrpjf" podUID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2"
Jan 27 21:50:43 crc kubenswrapper[4803]: I0127 21:50:43.141778 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 27 21:50:43 crc kubenswrapper[4803]: W0127 21:50:43.151902 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd32a4347_7d5e_4c36_ab79_2815fa7b5fbf.slice/crio-36f39860c2015bfc6c5befcccb6934e4957470c583f99ed8e9d48e5e6d762c37 WatchSource:0}: Error finding container 36f39860c2015bfc6c5befcccb6934e4957470c583f99ed8e9d48e5e6d762c37: Status 404 returned error can't find the container with id 36f39860c2015bfc6c5befcccb6934e4957470c583f99ed8e9d48e5e6d762c37
Jan 27 21:50:43 crc kubenswrapper[4803]: I0127 21:50:43.180271 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 21:50:43 crc kubenswrapper[4803]: W0127 21:50:43.186934 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb0181537_cbf9_4620_8709_04a5fc5cf618.slice/crio-88a6bc1c7e832c279fe406c0d2c8e77561a1b52c485d689f33e94fd6ebb2f62b WatchSource:0}: Error finding container 88a6bc1c7e832c279fe406c0d2c8e77561a1b52c485d689f33e94fd6ebb2f62b: Status 404 returned error can't find the container with id 88a6bc1c7e832c279fe406c0d2c8e77561a1b52c485d689f33e94fd6ebb2f62b
Jan 27 21:50:43 crc kubenswrapper[4803]: I0127 21:50:43.770335 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf","Type":"ContainerStarted","Data":"36f39860c2015bfc6c5befcccb6934e4957470c583f99ed8e9d48e5e6d762c37"}
Jan 27 21:50:43 crc kubenswrapper[4803]: I0127 21:50:43.771321 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b0181537-cbf9-4620-8709-04a5fc5cf618","Type":"ContainerStarted","Data":"88a6bc1c7e832c279fe406c0d2c8e77561a1b52c485d689f33e94fd6ebb2f62b"}
Jan 27 21:50:44 crc kubenswrapper[4803]: E0127 21:50:44.174437 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 27 21:50:44 crc kubenswrapper[4803]: E0127 21:50:44.174663 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4wt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-m24l6_openshift-marketplace(6a2e67f5-2414-4850-a255-53737799d98b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 21:50:44 crc kubenswrapper[4803]: E0127 21:50:44.175910 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-m24l6" podUID="6a2e67f5-2414-4850-a255-53737799d98b"
Jan 27 21:50:44 crc kubenswrapper[4803]: I0127 21:50:44.777758 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-72wq6" event={"ID":"0d757da7-4079-4a7a-806d-560834fe95ae","Type":"ContainerStarted","Data":"c06f8933218d5221beee44f7989ba40474f12ca18ad0885a55aa9379cef993ab"}
Jan 27 21:50:44 crc kubenswrapper[4803]: E0127 21:50:44.779186 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m24l6" podUID="6a2e67f5-2414-4850-a255-53737799d98b"
Jan 27 21:50:45 crc kubenswrapper[4803]: E0127 21:50:45.447518 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 27 21:50:45 crc kubenswrapper[4803]: E0127 21:50:45.447692 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bvwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fjdxb_openshift-marketplace(0d40bdc5-4adc-4650-934f-265f8614a1bb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 21:50:45 crc kubenswrapper[4803]: E0127 21:50:45.448984 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fjdxb" podUID="0d40bdc5-4adc-4650-934f-265f8614a1bb"
Jan 27 21:50:45 crc kubenswrapper[4803]: E0127 21:50:45.667619 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 27 21:50:45 crc kubenswrapper[4803]: E0127 21:50:45.667765 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpflg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bmmdh_openshift-marketplace(5b352572-d580-4fb4-b60a-49db17322472): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 21:50:45 crc kubenswrapper[4803]: E0127 21:50:45.668945 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bmmdh" podUID="5b352572-d580-4fb4-b60a-49db17322472"
Jan 27 21:50:45 crc kubenswrapper[4803]: I0127 21:50:45.787311 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf","Type":"ContainerStarted","Data":"008eff428752ba796936f41a3c6dc0a1670c26dd4abc07b36febd2283e20c101"}
Jan 27 21:50:45 crc kubenswrapper[4803]: I0127 21:50:45.789040 4803 generic.go:334] "Generic (PLEG): container finished" podID="b0181537-cbf9-4620-8709-04a5fc5cf618" containerID="7f2b50647e275ef808f3ae2155eb6535639b31c2037de31aa01062a69a83d8ee" exitCode=0
Jan 27 21:50:45 crc kubenswrapper[4803]: I0127 21:50:45.789094 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b0181537-cbf9-4620-8709-04a5fc5cf618","Type":"ContainerDied","Data":"7f2b50647e275ef808f3ae2155eb6535639b31c2037de31aa01062a69a83d8ee"}
Jan 27 21:50:45 crc kubenswrapper[4803]: I0127 21:50:45.792033 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-72wq6" event={"ID":"0d757da7-4079-4a7a-806d-560834fe95ae","Type":"ContainerStarted","Data":"c283bf495e37bdf87a96d2027d2bd722cd2ec40953e936eb9a1e23f7d213fa18"}
Jan 27 21:50:45 crc kubenswrapper[4803]: E0127 21:50:45.793170 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bmmdh" podUID="5b352572-d580-4fb4-b60a-49db17322472"
Jan 27 21:50:45 crc kubenswrapper[4803]: E0127 21:50:45.794514 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fjdxb" podUID="0d40bdc5-4adc-4650-934f-265f8614a1bb"
Jan 27 21:50:45 crc kubenswrapper[4803]: I0127 21:50:45.806203 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.806180727 podStartE2EDuration="3.806180727s" podCreationTimestamp="2026-01-27 21:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:50:45.802324104 +0000 UTC m=+198.218345813" watchObservedRunningTime="2026-01-27 21:50:45.806180727 +0000 UTC m=+198.222202426"
Jan 27 21:50:45 crc kubenswrapper[4803]: I0127 21:50:45.850095 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-72wq6" podStartSLOduration=177.850072731 podStartE2EDuration="2m57.850072731s" podCreationTimestamp="2026-01-27 21:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:50:45.848413146 +0000 UTC m=+198.264434845" watchObservedRunningTime="2026-01-27 21:50:45.850072731 +0000 UTC m=+198.266094440"
Jan 27 21:50:46 crc kubenswrapper[4803]: I0127 21:50:46.343497 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 21:50:46 crc kubenswrapper[4803]: I0127 21:50:46.343567 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 21:50:47 crc kubenswrapper[4803]: I0127 21:50:47.053033 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
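The pod_startup_latency_tracker entries above record the kubelet's SLO bookkeeping: podStartE2EDuration is the gap between podCreationTimestamp and the watch-observed running time, e.g. 21:50:45.806180727 minus 21:50:42 = 3.806180727s for installer-9-crc. A minimal sketch of that arithmetic, assuming only the timestamp layout shown in the entries:

```go
// slo_duration.go — reproduce a podStartE2EDuration value from the logged
// timestamps. The layout string is an assumption matching the
// "2026-01-27 21:50:42 +0000 UTC" format used in these entries.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// podCreationTimestamp and watchObservedRunningTime for installer-9-crc,
	// copied from the log above.
	created, _ := time.Parse(layout, "2026-01-27 21:50:42 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-27 21:50:45.806180727 +0000 UTC")

	// Prints 3.806180727s, matching podStartE2EDuration="3.806180727s".
	fmt.Println(running.Sub(created))
}
```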
Jan 27 21:50:47 crc kubenswrapper[4803]: I0127 21:50:47.222063 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0181537-cbf9-4620-8709-04a5fc5cf618-kube-api-access\") pod \"b0181537-cbf9-4620-8709-04a5fc5cf618\" (UID: \"b0181537-cbf9-4620-8709-04a5fc5cf618\") "
Jan 27 21:50:47 crc kubenswrapper[4803]: I0127 21:50:47.222198 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0181537-cbf9-4620-8709-04a5fc5cf618-kubelet-dir\") pod \"b0181537-cbf9-4620-8709-04a5fc5cf618\" (UID: \"b0181537-cbf9-4620-8709-04a5fc5cf618\") "
Jan 27 21:50:47 crc kubenswrapper[4803]: I0127 21:50:47.222330 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0181537-cbf9-4620-8709-04a5fc5cf618-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0181537-cbf9-4620-8709-04a5fc5cf618" (UID: "b0181537-cbf9-4620-8709-04a5fc5cf618"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 21:50:47 crc kubenswrapper[4803]: I0127 21:50:47.222625 4803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0181537-cbf9-4620-8709-04a5fc5cf618-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 21:50:47 crc kubenswrapper[4803]: I0127 21:50:47.228733 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0181537-cbf9-4620-8709-04a5fc5cf618-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0181537-cbf9-4620-8709-04a5fc5cf618" (UID: "b0181537-cbf9-4620-8709-04a5fc5cf618"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:50:47 crc kubenswrapper[4803]: I0127 21:50:47.324100 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0181537-cbf9-4620-8709-04a5fc5cf618-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 21:50:47 crc kubenswrapper[4803]: I0127 21:50:47.802299 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b0181537-cbf9-4620-8709-04a5fc5cf618","Type":"ContainerDied","Data":"88a6bc1c7e832c279fe406c0d2c8e77561a1b52c485d689f33e94fd6ebb2f62b"}
Jan 27 21:50:47 crc kubenswrapper[4803]: I0127 21:50:47.802359 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88a6bc1c7e832c279fe406c0d2c8e77561a1b52c485d689f33e94fd6ebb2f62b"
Jan 27 21:50:47 crc kubenswrapper[4803]: I0127 21:50:47.802379 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 21:50:49 crc kubenswrapper[4803]: I0127 21:50:49.815453 4803 generic.go:334] "Generic (PLEG): container finished" podID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" containerID="0ae570f7b52d2a284c5bd60307d9619ec2e4a195d78f5231c72c91f6c9ebc389" exitCode=0
Jan 27 21:50:49 crc kubenswrapper[4803]: I0127 21:50:49.815522 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmd2q" event={"ID":"f63e0833-14f7-4d43-805c-a5a05c2fdf02","Type":"ContainerDied","Data":"0ae570f7b52d2a284c5bd60307d9619ec2e4a195d78f5231c72c91f6c9ebc389"}
Jan 27 21:50:50 crc kubenswrapper[4803]: I0127 21:50:50.825209 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmd2q" event={"ID":"f63e0833-14f7-4d43-805c-a5a05c2fdf02","Type":"ContainerStarted","Data":"990039d243b7a0d79cc1a8360fb8706ad0615ac19a422edb3af2c75e5f3fc675"}
Jan 27 21:50:50 crc kubenswrapper[4803]: I0127 21:50:50.847696 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pmd2q" podStartSLOduration=2.901860508 podStartE2EDuration="55.847677717s" podCreationTimestamp="2026-01-27 21:49:55 +0000 UTC" firstStartedPulling="2026-01-27 21:49:57.255727409 +0000 UTC m=+149.671749098" lastFinishedPulling="2026-01-27 21:50:50.201544578 +0000 UTC m=+202.617566307" observedRunningTime="2026-01-27 21:50:50.842348085 +0000 UTC m=+203.258369794" watchObservedRunningTime="2026-01-27 21:50:50.847677717 +0000 UTC m=+203.263699426"
Jan 27 21:50:55 crc kubenswrapper[4803]: I0127 21:50:55.510230 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:50:55 crc kubenswrapper[4803]: I0127 21:50:55.510563 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:50:55 crc kubenswrapper[4803]: I0127 21:50:55.681875 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:50:55 crc kubenswrapper[4803]: I0127 21:50:55.854304 4803 generic.go:334] "Generic (PLEG): container finished" podID="68252b8f-1a1c-46c9-b037-743fd227e55a" containerID="9b2b97d7b908771e655739dadaff5fb2678ea4ce2ceaeabdc60780c73c2720a0" exitCode=0
Jan 27 21:50:55 crc kubenswrapper[4803]: I0127 21:50:55.854366 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjx65" event={"ID":"68252b8f-1a1c-46c9-b037-743fd227e55a","Type":"ContainerDied","Data":"9b2b97d7b908771e655739dadaff5fb2678ea4ce2ceaeabdc60780c73c2720a0"}
Jan 27 21:50:55 crc kubenswrapper[4803]: I0127 21:50:55.857151 4803 generic.go:334] "Generic (PLEG): container finished" podID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerID="74b606ff93d974fdbaaa767f644aeeddc1b689fb36311f71f805e813fea696d9" exitCode=0
Jan 27 21:50:55 crc kubenswrapper[4803]: I0127 21:50:55.858534 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cppwp" event={"ID":"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a","Type":"ContainerDied","Data":"74b606ff93d974fdbaaa767f644aeeddc1b689fb36311f71f805e813fea696d9"}
Jan 27 21:50:55 crc kubenswrapper[4803]: I0127 21:50:55.903239 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pmd2q"
Jan 27 21:50:56 crc kubenswrapper[4803]: I0127 21:50:56.865985 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cppwp" event={"ID":"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a","Type":"ContainerStarted","Data":"8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2"}
Jan 27 21:50:56 crc kubenswrapper[4803]: I0127 21:50:56.869735 4803 generic.go:334] "Generic (PLEG): container finished" podID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" containerID="ea2e4d96356794a469c077c354c1730ca3e53cbfde7e939c9cb1a5893132e6b8" exitCode=0
Jan 27 21:50:56 crc kubenswrapper[4803]: I0127 21:50:56.869807 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbg9j" event={"ID":"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1","Type":"ContainerDied","Data":"ea2e4d96356794a469c077c354c1730ca3e53cbfde7e939c9cb1a5893132e6b8"}
Jan 27 21:50:56 crc kubenswrapper[4803]: I0127 21:50:56.872356 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjx65" event={"ID":"68252b8f-1a1c-46c9-b037-743fd227e55a","Type":"ContainerStarted","Data":"f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b"}
Jan 27 21:50:56 crc kubenswrapper[4803]: I0127 21:50:56.885106 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cppwp" podStartSLOduration=2.904275938 podStartE2EDuration="59.885091324s" podCreationTimestamp="2026-01-27 21:49:57 +0000 UTC" firstStartedPulling="2026-01-27 21:49:59.370746559 +0000 UTC m=+151.786768268" lastFinishedPulling="2026-01-27 21:50:56.351561955 +0000 UTC m=+208.767583654" observedRunningTime="2026-01-27 21:50:56.884062907 +0000 UTC m=+209.300084616" watchObservedRunningTime="2026-01-27 21:50:56.885091324 +0000 UTC m=+209.301113023"
Jan 27 21:50:56 crc kubenswrapper[4803]: I0127 21:50:56.918139 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjx65" podStartSLOduration=2.883333723 podStartE2EDuration="1m1.918120827s" podCreationTimestamp="2026-01-27 21:49:55 +0000 UTC" firstStartedPulling="2026-01-27 21:49:57.252351769 +0000 UTC m=+149.668373478" lastFinishedPulling="2026-01-27 21:50:56.287138883 +0000 UTC m=+208.703160582" observedRunningTime="2026-01-27 21:50:56.917403498 +0000 UTC m=+209.333425197" watchObservedRunningTime="2026-01-27 21:50:56.918120827 +0000 UTC m=+209.334142526"
Jan 27 21:50:57 crc kubenswrapper[4803]: I0127 21:50:57.800109 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7x4wr"]
Jan 27 21:50:57 crc kubenswrapper[4803]: I0127 21:50:57.878549 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbg9j" event={"ID":"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1","Type":"ContainerStarted","Data":"5f8c63c87ebdf26cc3572e28225590b76f0b908e5448fd4746f2d7efc03e741e"}
Jan 27 21:50:57 crc kubenswrapper[4803]: I0127 21:50:57.900861 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sbg9j" podStartSLOduration=1.909332516 podStartE2EDuration="1m0.900826531s" podCreationTimestamp="2026-01-27 21:49:57 +0000 UTC" firstStartedPulling="2026-01-27 21:49:58.292689726 +0000 UTC m=+150.708711425" lastFinishedPulling="2026-01-27 21:50:57.284183741 +0000 UTC m=+209.700205440" observedRunningTime="2026-01-27 21:50:57.900215965 +0000 UTC m=+210.316237664" watchObservedRunningTime="2026-01-27 21:50:57.900826531 +0000 UTC m=+210.316848230"
Jan 27 21:50:57 crc kubenswrapper[4803]: I0127 21:50:57.943738 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cppwp"
Jan 27 21:50:57 crc kubenswrapper[4803]: I0127 21:50:57.943794 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cppwp"
Jan 27 21:50:58 crc kubenswrapper[4803]: I0127 21:50:58.887748 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrpjf" event={"ID":"467bcdf9-e419-4ef2-84af-2cfedbfa28f2","Type":"ContainerStarted","Data":"ff03cf3e18368e2a72134fdbe6b40f5d3160fa446e035b6b0e8cbc8030700a17"}
Jan 27 21:50:59 crc kubenswrapper[4803]: I0127 21:50:59.008507 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cppwp" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerName="registry-server" probeResult="failure" output=<
Jan 27 21:50:59 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 21:50:59 crc kubenswrapper[4803]: >
Jan 27 21:50:59 crc kubenswrapper[4803]: I0127 21:50:59.894057 4803 generic.go:334] "Generic (PLEG): container finished" podID="6a2e67f5-2414-4850-a255-53737799d98b" containerID="caf1730c6fee8c1714eb37929b9ba40dacf759c9f3a3887c8b405380564a10f3" exitCode=0
Jan 27 21:50:59 crc kubenswrapper[4803]: I0127 21:50:59.894114 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24l6" event={"ID":"6a2e67f5-2414-4850-a255-53737799d98b","Type":"ContainerDied","Data":"caf1730c6fee8c1714eb37929b9ba40dacf759c9f3a3887c8b405380564a10f3"}
Jan 27 21:50:59 crc kubenswrapper[4803]: I0127 21:50:59.897081 4803 generic.go:334] "Generic (PLEG): container finished" podID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" containerID="ff03cf3e18368e2a72134fdbe6b40f5d3160fa446e035b6b0e8cbc8030700a17" exitCode=0
Jan 27 21:50:59 crc kubenswrapper[4803]: I0127 21:50:59.897161 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrpjf" event={"ID":"467bcdf9-e419-4ef2-84af-2cfedbfa28f2","Type":"ContainerDied","Data":"ff03cf3e18368e2a72134fdbe6b40f5d3160fa446e035b6b0e8cbc8030700a17"}
Jan 27 21:50:59 crc kubenswrapper[4803]: I0127 21:50:59.900978 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmmdh" event={"ID":"5b352572-d580-4fb4-b60a-49db17322472","Type":"ContainerStarted","Data":"42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d"}
Jan 27 21:51:00 crc kubenswrapper[4803]: I0127 21:51:00.908334 4803 generic.go:334] "Generic (PLEG): container finished" podID="0d40bdc5-4adc-4650-934f-265f8614a1bb" containerID="61af021de2f9f80a61645c8a4f455fa89f55d8b4e46c10af763d1a86e4199937" exitCode=0
Jan 27 21:51:00 crc kubenswrapper[4803]: I0127 21:51:00.908417 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjdxb" event={"ID":"0d40bdc5-4adc-4650-934f-265f8614a1bb","Type":"ContainerDied","Data":"61af021de2f9f80a61645c8a4f455fa89f55d8b4e46c10af763d1a86e4199937"}
Jan 27 21:51:00 crc kubenswrapper[4803]: I0127 21:51:00.912778 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24l6" event={"ID":"6a2e67f5-2414-4850-a255-53737799d98b","Type":"ContainerStarted","Data":"c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134"}
Jan 27 21:51:00 crc kubenswrapper[4803]: I0127 21:51:00.915369 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrpjf" event={"ID":"467bcdf9-e419-4ef2-84af-2cfedbfa28f2","Type":"ContainerStarted","Data":"a9528c792f84d4a25d37955d284f8e27afa90ac4949ae3fa3f4e51b091ce208c"}
Jan 27 21:51:00 crc kubenswrapper[4803]: I0127 21:51:00.917664 4803 generic.go:334] "Generic (PLEG): container finished" podID="5b352572-d580-4fb4-b60a-49db17322472" containerID="42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d" exitCode=0
Jan 27 21:51:00 crc kubenswrapper[4803]: I0127 21:51:00.917713 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmmdh" event={"ID":"5b352572-d580-4fb4-b60a-49db17322472","Type":"ContainerDied","Data":"42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d"}
Jan 27 21:51:00 crc kubenswrapper[4803]: I0127 21:51:00.917748 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmmdh" event={"ID":"5b352572-d580-4fb4-b60a-49db17322472","Type":"ContainerStarted","Data":"b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f"}
Jan 27 21:51:00 crc kubenswrapper[4803]: I0127 21:51:00.957527 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bmmdh" podStartSLOduration=3.058221411 podStartE2EDuration="1m2.957511594s" podCreationTimestamp="2026-01-27 21:49:58 +0000 UTC" firstStartedPulling="2026-01-27 21:50:00.396989939 +0000 UTC m=+152.813011638" lastFinishedPulling="2026-01-27 21:51:00.296280122 +0000 UTC m=+212.712301821" observedRunningTime="2026-01-27 21:51:00.953777995 +0000 UTC m=+213.369799694" watchObservedRunningTime="2026-01-27 21:51:00.957511594 +0000 UTC m=+213.373533293"
Jan 27 21:51:00 crc kubenswrapper[4803]: I0127 21:51:00.977867 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m24l6" podStartSLOduration=3.853578467 podStartE2EDuration="1m6.977826247s" podCreationTimestamp="2026-01-27 21:49:54 +0000 UTC" firstStartedPulling="2026-01-27 21:49:57.244113379 +0000 UTC m=+149.660135078" lastFinishedPulling="2026-01-27 21:51:00.368361159 +0000 UTC m=+212.784382858" observedRunningTime="2026-01-27 21:51:00.974899549 +0000 UTC m=+213.390921238" watchObservedRunningTime="2026-01-27 21:51:00.977826247 +0000 UTC m=+213.393847946"
Jan 27 21:51:00 crc kubenswrapper[4803]: I0127 21:51:00.994547 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wrpjf" podStartSLOduration=1.9572398629999999 podStartE2EDuration="1m2.994531413s" podCreationTimestamp="2026-01-27 21:49:58 +0000 UTC" firstStartedPulling="2026-01-27 21:49:59.370326658 +0000 UTC m=+151.786348357" lastFinishedPulling="2026-01-27 21:51:00.407618208 +0000 UTC m=+212.823639907" observedRunningTime="2026-01-27 21:51:00.993087045 +0000 UTC m=+213.409108754" watchObservedRunningTime="2026-01-27 21:51:00.994531413 +0000 UTC m=+213.410553112"
Jan 27 21:51:01 crc kubenswrapper[4803]: I0127 21:51:01.924892 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjdxb" event={"ID":"0d40bdc5-4adc-4650-934f-265f8614a1bb","Type":"ContainerStarted","Data":"f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225"}
Jan 27 21:51:01 crc kubenswrapper[4803]: I0127 21:51:01.947300 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fjdxb" podStartSLOduration=2.879823949 podStartE2EDuration="1m6.947277837s" podCreationTimestamp="2026-01-27 21:49:55 +0000 UTC" firstStartedPulling="2026-01-27 21:49:57.240190394 +0000 UTC m=+149.656212093" lastFinishedPulling="2026-01-27 21:51:01.307644282 +0000 UTC m=+213.723665981" observedRunningTime="2026-01-27 21:51:01.942315144 +0000 UTC m=+214.358336863" watchObservedRunningTime="2026-01-27 21:51:01.947277837 +0000 UTC m=+214.363299536"
Jan 27 21:51:05 crc kubenswrapper[4803]: I0127 21:51:05.289657 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:51:05 crc kubenswrapper[4803]: I0127 21:51:05.290104 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:51:05 crc kubenswrapper[4803]: I0127 21:51:05.335069 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:51:05 crc kubenswrapper[4803]: I0127 21:51:05.971511 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:51:05 crc kubenswrapper[4803]: I0127 21:51:05.971975 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:51:05 crc kubenswrapper[4803]: I0127 21:51:05.978195 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m24l6"
Jan 27 21:51:05 crc kubenswrapper[4803]: I0127 21:51:05.990480 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:51:05 crc kubenswrapper[4803]: I0127 21:51:05.990517 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:51:06 crc kubenswrapper[4803]: I0127 21:51:06.013460 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:51:06 crc kubenswrapper[4803]: I0127 21:51:06.036778 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:51:06 crc kubenswrapper[4803]: I0127 21:51:06.985354 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:51:06 crc kubenswrapper[4803]: I0127 21:51:06.987361 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:51:07 crc kubenswrapper[4803]: I0127 21:51:07.471959 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:51:07 crc kubenswrapper[4803]: I0127 21:51:07.472009 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:51:07 crc kubenswrapper[4803]: I0127 21:51:07.506708 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:51:07 crc kubenswrapper[4803]: I0127 21:51:07.979703 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cppwp"
Jan 27 21:51:07 crc kubenswrapper[4803]: I0127 21:51:07.986361 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sbg9j"
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.021146 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cppwp"
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.338331 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjx65"]
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.497645 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wrpjf"
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.497964 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wrpjf"
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.532570 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wrpjf"
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.937119 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bmmdh"
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.937178 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bmmdh"
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.937240 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fjdxb"]
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.954376 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fjdxb" podUID="0d40bdc5-4adc-4650-934f-265f8614a1bb" containerName="registry-server" containerID="cri-o://f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225" gracePeriod=2
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.955533 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gjx65" podUID="68252b8f-1a1c-46c9-b037-743fd227e55a" containerName="registry-server" containerID="cri-o://f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b" gracePeriod=2
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.984267 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bmmdh"
Jan 27 21:51:08 crc kubenswrapper[4803]: I0127 21:51:08.992332 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wrpjf"
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.324023 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.330302 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.396727 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-utilities\") pod \"68252b8f-1a1c-46c9-b037-743fd227e55a\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") "
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.396794 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-utilities\") pod \"0d40bdc5-4adc-4650-934f-265f8614a1bb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") "
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.396839 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bvwz\" (UniqueName: \"kubernetes.io/projected/0d40bdc5-4adc-4650-934f-265f8614a1bb-kube-api-access-5bvwz\") pod \"0d40bdc5-4adc-4650-934f-265f8614a1bb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") "
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.396962 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hjl7\" (UniqueName: \"kubernetes.io/projected/68252b8f-1a1c-46c9-b037-743fd227e55a-kube-api-access-7hjl7\") pod \"68252b8f-1a1c-46c9-b037-743fd227e55a\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") "
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.396992 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-catalog-content\") pod \"0d40bdc5-4adc-4650-934f-265f8614a1bb\" (UID: \"0d40bdc5-4adc-4650-934f-265f8614a1bb\") "
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.397038 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-catalog-content\") pod \"68252b8f-1a1c-46c9-b037-743fd227e55a\" (UID: \"68252b8f-1a1c-46c9-b037-743fd227e55a\") "
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.398343 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-utilities" (OuterVolumeSpecName: "utilities") pod "68252b8f-1a1c-46c9-b037-743fd227e55a" (UID: "68252b8f-1a1c-46c9-b037-743fd227e55a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.398391 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-utilities" (OuterVolumeSpecName: "utilities") pod "0d40bdc5-4adc-4650-934f-265f8614a1bb" (UID: "0d40bdc5-4adc-4650-934f-265f8614a1bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.403226 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68252b8f-1a1c-46c9-b037-743fd227e55a-kube-api-access-7hjl7" (OuterVolumeSpecName: "kube-api-access-7hjl7") pod "68252b8f-1a1c-46c9-b037-743fd227e55a" (UID: "68252b8f-1a1c-46c9-b037-743fd227e55a"). InnerVolumeSpecName "kube-api-access-7hjl7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.404360 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d40bdc5-4adc-4650-934f-265f8614a1bb-kube-api-access-5bvwz" (OuterVolumeSpecName: "kube-api-access-5bvwz") pod "0d40bdc5-4adc-4650-934f-265f8614a1bb" (UID: "0d40bdc5-4adc-4650-934f-265f8614a1bb"). InnerVolumeSpecName "kube-api-access-5bvwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.443773 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d40bdc5-4adc-4650-934f-265f8614a1bb" (UID: "0d40bdc5-4adc-4650-934f-265f8614a1bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.449145 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68252b8f-1a1c-46c9-b037-743fd227e55a" (UID: "68252b8f-1a1c-46c9-b037-743fd227e55a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.498441 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.498462 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.498472 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bvwz\" (UniqueName: \"kubernetes.io/projected/0d40bdc5-4adc-4650-934f-265f8614a1bb-kube-api-access-5bvwz\") on node \"crc\" DevicePath \"\""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.498483 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hjl7\" (UniqueName: \"kubernetes.io/projected/68252b8f-1a1c-46c9-b037-743fd227e55a-kube-api-access-7hjl7\") on node \"crc\" DevicePath \"\""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.498490 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d40bdc5-4adc-4650-934f-265f8614a1bb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.498499 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68252b8f-1a1c-46c9-b037-743fd227e55a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.972520 4803 generic.go:334] "Generic (PLEG): container finished" podID="0d40bdc5-4adc-4650-934f-265f8614a1bb" containerID="f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225" exitCode=0
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.972601 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjdxb" event={"ID":"0d40bdc5-4adc-4650-934f-265f8614a1bb","Type":"ContainerDied","Data":"f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225"}
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.972949 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjdxb" event={"ID":"0d40bdc5-4adc-4650-934f-265f8614a1bb","Type":"ContainerDied","Data":"557e205a87dc377c68ec11d3e908667194130ba4dc412c994b1c71b1b4d1ad93"}
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.972973 4803 scope.go:117] "RemoveContainer" containerID="f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225"
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.972663 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjdxb"
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.981588 4803 generic.go:334] "Generic (PLEG): container finished" podID="68252b8f-1a1c-46c9-b037-743fd227e55a" containerID="f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b" exitCode=0
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.981642 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjx65"
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.981632 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjx65" event={"ID":"68252b8f-1a1c-46c9-b037-743fd227e55a","Type":"ContainerDied","Data":"f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b"}
Jan 27 21:51:09 crc kubenswrapper[4803]: I0127 21:51:09.981687 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjx65" event={"ID":"68252b8f-1a1c-46c9-b037-743fd227e55a","Type":"ContainerDied","Data":"eebf30b43121acf3e36c21db119cb54040b3d0afbcb2c53ad58b3ea8441b836d"}
Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.006397 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fjdxb"]
Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.014001 4803 scope.go:117] "RemoveContainer" containerID="61af021de2f9f80a61645c8a4f455fa89f55d8b4e46c10af763d1a86e4199937"
Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.014046 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fjdxb"]
Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.024470 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bmmdh"
Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.043719 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjx65"]
Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.046267 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gjx65"]
Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.053433 4803 scope.go:117] "RemoveContainer" containerID="c64a397b3924cb57667fd396c60f6903352c7f18de2ab1f2eb5066355c9f1479"
Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.068413 4803 scope.go:117] "RemoveContainer" containerID="f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225"
Jan 27 21:51:10 crc kubenswrapper[4803]: E0127 21:51:10.070287 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container
\"f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225\": container with ID starting with f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225 not found: ID does not exist" containerID="f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.070321 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225"} err="failed to get container status \"f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225\": rpc error: code = NotFound desc = could not find container \"f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225\": container with ID starting with f9f922e0b24a1313ff9a857cf9bbbb3e0e6a96469681c752c88b1c6275df5225 not found: ID does not exist" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.070367 4803 scope.go:117] "RemoveContainer" containerID="61af021de2f9f80a61645c8a4f455fa89f55d8b4e46c10af763d1a86e4199937" Jan 27 21:51:10 crc kubenswrapper[4803]: E0127 21:51:10.070624 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61af021de2f9f80a61645c8a4f455fa89f55d8b4e46c10af763d1a86e4199937\": container with ID starting with 61af021de2f9f80a61645c8a4f455fa89f55d8b4e46c10af763d1a86e4199937 not found: ID does not exist" containerID="61af021de2f9f80a61645c8a4f455fa89f55d8b4e46c10af763d1a86e4199937" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.070668 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61af021de2f9f80a61645c8a4f455fa89f55d8b4e46c10af763d1a86e4199937"} err="failed to get container status \"61af021de2f9f80a61645c8a4f455fa89f55d8b4e46c10af763d1a86e4199937\": rpc error: code = NotFound desc = could not find container \"61af021de2f9f80a61645c8a4f455fa89f55d8b4e46c10af763d1a86e4199937\": container with ID starting with 61af021de2f9f80a61645c8a4f455fa89f55d8b4e46c10af763d1a86e4199937 not found: ID does not exist" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.070682 4803 scope.go:117] "RemoveContainer" containerID="c64a397b3924cb57667fd396c60f6903352c7f18de2ab1f2eb5066355c9f1479" Jan 27 21:51:10 crc kubenswrapper[4803]: E0127 21:51:10.070964 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c64a397b3924cb57667fd396c60f6903352c7f18de2ab1f2eb5066355c9f1479\": container with ID starting with c64a397b3924cb57667fd396c60f6903352c7f18de2ab1f2eb5066355c9f1479 not found: ID does not exist" containerID="c64a397b3924cb57667fd396c60f6903352c7f18de2ab1f2eb5066355c9f1479" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.070983 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64a397b3924cb57667fd396c60f6903352c7f18de2ab1f2eb5066355c9f1479"} err="failed to get container status \"c64a397b3924cb57667fd396c60f6903352c7f18de2ab1f2eb5066355c9f1479\": rpc error: code = NotFound desc = could not find container \"c64a397b3924cb57667fd396c60f6903352c7f18de2ab1f2eb5066355c9f1479\": container with ID starting with c64a397b3924cb57667fd396c60f6903352c7f18de2ab1f2eb5066355c9f1479 not found: ID does not exist" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.070994 4803 scope.go:117] "RemoveContainer" containerID="f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b" Jan 27 21:51:10 crc 
kubenswrapper[4803]: I0127 21:51:10.096929 4803 scope.go:117] "RemoveContainer" containerID="9b2b97d7b908771e655739dadaff5fb2678ea4ce2ceaeabdc60780c73c2720a0" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.113551 4803 scope.go:117] "RemoveContainer" containerID="a4fc6b1a227c2d4ec40cff3c79bb5eb3d9e89a7be977be1fcc926104d24cdb1c" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.135970 4803 scope.go:117] "RemoveContainer" containerID="f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b" Jan 27 21:51:10 crc kubenswrapper[4803]: E0127 21:51:10.136402 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b\": container with ID starting with f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b not found: ID does not exist" containerID="f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.136469 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b"} err="failed to get container status \"f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b\": rpc error: code = NotFound desc = could not find container \"f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b\": container with ID starting with f0e6bafbb264dfd2926a476b7f1f55d964971249cea480e47dcae740068d116b not found: ID does not exist" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.136509 4803 scope.go:117] "RemoveContainer" containerID="9b2b97d7b908771e655739dadaff5fb2678ea4ce2ceaeabdc60780c73c2720a0" Jan 27 21:51:10 crc kubenswrapper[4803]: E0127 21:51:10.136824 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b2b97d7b908771e655739dadaff5fb2678ea4ce2ceaeabdc60780c73c2720a0\": container with ID starting with 9b2b97d7b908771e655739dadaff5fb2678ea4ce2ceaeabdc60780c73c2720a0 not found: ID does not exist" containerID="9b2b97d7b908771e655739dadaff5fb2678ea4ce2ceaeabdc60780c73c2720a0" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.136866 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b2b97d7b908771e655739dadaff5fb2678ea4ce2ceaeabdc60780c73c2720a0"} err="failed to get container status \"9b2b97d7b908771e655739dadaff5fb2678ea4ce2ceaeabdc60780c73c2720a0\": rpc error: code = NotFound desc = could not find container \"9b2b97d7b908771e655739dadaff5fb2678ea4ce2ceaeabdc60780c73c2720a0\": container with ID starting with 9b2b97d7b908771e655739dadaff5fb2678ea4ce2ceaeabdc60780c73c2720a0 not found: ID does not exist" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.136888 4803 scope.go:117] "RemoveContainer" containerID="a4fc6b1a227c2d4ec40cff3c79bb5eb3d9e89a7be977be1fcc926104d24cdb1c" Jan 27 21:51:10 crc kubenswrapper[4803]: E0127 21:51:10.137143 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4fc6b1a227c2d4ec40cff3c79bb5eb3d9e89a7be977be1fcc926104d24cdb1c\": container with ID starting with a4fc6b1a227c2d4ec40cff3c79bb5eb3d9e89a7be977be1fcc926104d24cdb1c not found: ID does not exist" containerID="a4fc6b1a227c2d4ec40cff3c79bb5eb3d9e89a7be977be1fcc926104d24cdb1c" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.137175 4803 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4fc6b1a227c2d4ec40cff3c79bb5eb3d9e89a7be977be1fcc926104d24cdb1c"} err="failed to get container status \"a4fc6b1a227c2d4ec40cff3c79bb5eb3d9e89a7be977be1fcc926104d24cdb1c\": rpc error: code = NotFound desc = could not find container \"a4fc6b1a227c2d4ec40cff3c79bb5eb3d9e89a7be977be1fcc926104d24cdb1c\": container with ID starting with a4fc6b1a227c2d4ec40cff3c79bb5eb3d9e89a7be977be1fcc926104d24cdb1c not found: ID does not exist" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.314579 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d40bdc5-4adc-4650-934f-265f8614a1bb" path="/var/lib/kubelet/pods/0d40bdc5-4adc-4650-934f-265f8614a1bb/volumes" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.315280 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68252b8f-1a1c-46c9-b037-743fd227e55a" path="/var/lib/kubelet/pods/68252b8f-1a1c-46c9-b037-743fd227e55a/volumes" Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.737700 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cppwp"] Jan 27 21:51:10 crc kubenswrapper[4803]: I0127 21:51:10.738043 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cppwp" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerName="registry-server" containerID="cri-o://8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2" gracePeriod=2 Jan 27 21:51:11 crc kubenswrapper[4803]: I0127 21:51:11.944069 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:51:11 crc kubenswrapper[4803]: I0127 21:51:11.995455 4803 generic.go:334] "Generic (PLEG): container finished" podID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerID="8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2" exitCode=0 Jan 27 21:51:11 crc kubenswrapper[4803]: I0127 21:51:11.995493 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cppwp" event={"ID":"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a","Type":"ContainerDied","Data":"8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2"} Jan 27 21:51:11 crc kubenswrapper[4803]: I0127 21:51:11.995518 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cppwp" event={"ID":"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a","Type":"ContainerDied","Data":"566559198e450756c86974e4ec477fa99c349978845be2ad15f94e6a022fb8ff"} Jan 27 21:51:11 crc kubenswrapper[4803]: I0127 21:51:11.995534 4803 scope.go:117] "RemoveContainer" containerID="8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2" Jan 27 21:51:11 crc kubenswrapper[4803]: I0127 21:51:11.995611 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cppwp" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.009991 4803 scope.go:117] "RemoveContainer" containerID="74b606ff93d974fdbaaa767f644aeeddc1b689fb36311f71f805e813fea696d9" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.030987 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-utilities\") pod \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.031065 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sllc\" (UniqueName: \"kubernetes.io/projected/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-kube-api-access-4sllc\") pod \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.031271 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-catalog-content\") pod \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\" (UID: \"0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a\") " Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.032084 4803 scope.go:117] "RemoveContainer" containerID="af1799e698d918ba85429c21fc20096fabca6be4b6f8a4ab56922b9090744f3d" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.033587 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-utilities" (OuterVolumeSpecName: "utilities") pod "0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" (UID: "0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.037595 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-kube-api-access-4sllc" (OuterVolumeSpecName: "kube-api-access-4sllc") pod "0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" (UID: "0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a"). InnerVolumeSpecName "kube-api-access-4sllc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.053136 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" (UID: "0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.054550 4803 scope.go:117] "RemoveContainer" containerID="8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2" Jan 27 21:51:12 crc kubenswrapper[4803]: E0127 21:51:12.055359 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2\": container with ID starting with 8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2 not found: ID does not exist" containerID="8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.055390 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2"} err="failed to get container status \"8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2\": rpc error: code = NotFound desc = could not find container \"8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2\": container with ID starting with 8afb8b44e65dc331ebc85faf9feffd10c0f0f2ffb98c99b2b86f02ddb4bc92d2 not found: ID does not exist" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.055419 4803 scope.go:117] "RemoveContainer" containerID="74b606ff93d974fdbaaa767f644aeeddc1b689fb36311f71f805e813fea696d9" Jan 27 21:51:12 crc kubenswrapper[4803]: E0127 21:51:12.055742 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74b606ff93d974fdbaaa767f644aeeddc1b689fb36311f71f805e813fea696d9\": container with ID starting with 74b606ff93d974fdbaaa767f644aeeddc1b689fb36311f71f805e813fea696d9 not found: ID does not exist" containerID="74b606ff93d974fdbaaa767f644aeeddc1b689fb36311f71f805e813fea696d9" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.055785 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74b606ff93d974fdbaaa767f644aeeddc1b689fb36311f71f805e813fea696d9"} err="failed to get container status \"74b606ff93d974fdbaaa767f644aeeddc1b689fb36311f71f805e813fea696d9\": rpc error: code = NotFound desc = could not find container \"74b606ff93d974fdbaaa767f644aeeddc1b689fb36311f71f805e813fea696d9\": container with ID starting with 74b606ff93d974fdbaaa767f644aeeddc1b689fb36311f71f805e813fea696d9 not found: ID does not exist" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.055811 4803 scope.go:117] "RemoveContainer" containerID="af1799e698d918ba85429c21fc20096fabca6be4b6f8a4ab56922b9090744f3d" Jan 27 21:51:12 crc kubenswrapper[4803]: E0127 21:51:12.056155 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af1799e698d918ba85429c21fc20096fabca6be4b6f8a4ab56922b9090744f3d\": container with ID starting with af1799e698d918ba85429c21fc20096fabca6be4b6f8a4ab56922b9090744f3d not found: ID does not exist" containerID="af1799e698d918ba85429c21fc20096fabca6be4b6f8a4ab56922b9090744f3d" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.056184 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af1799e698d918ba85429c21fc20096fabca6be4b6f8a4ab56922b9090744f3d"} err="failed to get container status \"af1799e698d918ba85429c21fc20096fabca6be4b6f8a4ab56922b9090744f3d\": rpc error: code = NotFound desc = could not 
find container \"af1799e698d918ba85429c21fc20096fabca6be4b6f8a4ab56922b9090744f3d\": container with ID starting with af1799e698d918ba85429c21fc20096fabca6be4b6f8a4ab56922b9090744f3d not found: ID does not exist" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.132252 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.132282 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.132294 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sllc\" (UniqueName: \"kubernetes.io/projected/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a-kube-api-access-4sllc\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.344236 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cppwp"] Jan 27 21:51:12 crc kubenswrapper[4803]: I0127 21:51:12.348452 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cppwp"] Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.143556 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bmmdh"] Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.143936 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bmmdh" podUID="5b352572-d580-4fb4-b60a-49db17322472" containerName="registry-server" containerID="cri-o://b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f" gracePeriod=2 Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.497446 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.653621 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-utilities\") pod \"5b352572-d580-4fb4-b60a-49db17322472\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.653672 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpflg\" (UniqueName: \"kubernetes.io/projected/5b352572-d580-4fb4-b60a-49db17322472-kube-api-access-wpflg\") pod \"5b352572-d580-4fb4-b60a-49db17322472\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.653716 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-catalog-content\") pod \"5b352572-d580-4fb4-b60a-49db17322472\" (UID: \"5b352572-d580-4fb4-b60a-49db17322472\") " Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.654672 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-utilities" (OuterVolumeSpecName: "utilities") pod "5b352572-d580-4fb4-b60a-49db17322472" (UID: "5b352572-d580-4fb4-b60a-49db17322472"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.657210 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b352572-d580-4fb4-b60a-49db17322472-kube-api-access-wpflg" (OuterVolumeSpecName: "kube-api-access-wpflg") pod "5b352572-d580-4fb4-b60a-49db17322472" (UID: "5b352572-d580-4fb4-b60a-49db17322472"). InnerVolumeSpecName "kube-api-access-wpflg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.755147 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.755185 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpflg\" (UniqueName: \"kubernetes.io/projected/5b352572-d580-4fb4-b60a-49db17322472-kube-api-access-wpflg\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.769980 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b352572-d580-4fb4-b60a-49db17322472" (UID: "5b352572-d580-4fb4-b60a-49db17322472"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:51:13 crc kubenswrapper[4803]: I0127 21:51:13.856109 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b352572-d580-4fb4-b60a-49db17322472-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.007290 4803 generic.go:334] "Generic (PLEG): container finished" podID="5b352572-d580-4fb4-b60a-49db17322472" containerID="b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f" exitCode=0 Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.007336 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmmdh" event={"ID":"5b352572-d580-4fb4-b60a-49db17322472","Type":"ContainerDied","Data":"b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f"} Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.007356 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmmdh" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.007371 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmmdh" event={"ID":"5b352572-d580-4fb4-b60a-49db17322472","Type":"ContainerDied","Data":"aba4572ff36784d3a76c91fe7b11445b71caf5afffddeb4c384c33e64aa4778f"} Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.007393 4803 scope.go:117] "RemoveContainer" containerID="b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.020079 4803 scope.go:117] "RemoveContainer" containerID="42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.040968 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bmmdh"] Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.044125 4803 scope.go:117] "RemoveContainer" containerID="8f9ad76218981e627c854387ae1791a632ac49655477c48bcbc0361d4aea0ad1" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.044345 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bmmdh"] Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.061631 4803 scope.go:117] "RemoveContainer" containerID="b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f" Jan 27 21:51:14 crc kubenswrapper[4803]: E0127 21:51:14.062149 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f\": container with ID starting with b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f not found: ID does not exist" containerID="b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.062191 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f"} err="failed to get container status \"b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f\": rpc error: code = NotFound desc = could not find container \"b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f\": container with ID starting with b17d92b52ef07821b11d714f254520989eb820ed8a7a1b5ceb3d78508c458e7f not found: ID does not exist" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.062221 4803 scope.go:117] "RemoveContainer" containerID="42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d" Jan 27 21:51:14 crc kubenswrapper[4803]: E0127 21:51:14.062572 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d\": container with ID starting with 42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d not found: ID does not exist" containerID="42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.062599 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d"} err="failed to get container status \"42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d\": rpc error: code = NotFound desc = could not find container 
\"42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d\": container with ID starting with 42020f0afc20f62faf7f165e2e409a8a8a7bfec3b761b01a5db6da6e0202984d not found: ID does not exist" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.062615 4803 scope.go:117] "RemoveContainer" containerID="8f9ad76218981e627c854387ae1791a632ac49655477c48bcbc0361d4aea0ad1" Jan 27 21:51:14 crc kubenswrapper[4803]: E0127 21:51:14.062874 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f9ad76218981e627c854387ae1791a632ac49655477c48bcbc0361d4aea0ad1\": container with ID starting with 8f9ad76218981e627c854387ae1791a632ac49655477c48bcbc0361d4aea0ad1 not found: ID does not exist" containerID="8f9ad76218981e627c854387ae1791a632ac49655477c48bcbc0361d4aea0ad1" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.062908 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9ad76218981e627c854387ae1791a632ac49655477c48bcbc0361d4aea0ad1"} err="failed to get container status \"8f9ad76218981e627c854387ae1791a632ac49655477c48bcbc0361d4aea0ad1\": rpc error: code = NotFound desc = could not find container \"8f9ad76218981e627c854387ae1791a632ac49655477c48bcbc0361d4aea0ad1\": container with ID starting with 8f9ad76218981e627c854387ae1791a632ac49655477c48bcbc0361d4aea0ad1 not found: ID does not exist" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.312060 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" path="/var/lib/kubelet/pods/0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a/volumes" Jan 27 21:51:14 crc kubenswrapper[4803]: I0127 21:51:14.312640 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b352572-d580-4fb4-b60a-49db17322472" path="/var/lib/kubelet/pods/5b352572-d580-4fb4-b60a-49db17322472/volumes" Jan 27 21:51:16 crc kubenswrapper[4803]: I0127 21:51:16.343334 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:51:16 crc kubenswrapper[4803]: I0127 21:51:16.343418 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:51:16 crc kubenswrapper[4803]: I0127 21:51:16.343474 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:51:16 crc kubenswrapper[4803]: I0127 21:51:16.344237 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:51:16 crc kubenswrapper[4803]: I0127 21:51:16.344320 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" 
podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7" gracePeriod=600 Jan 27 21:51:17 crc kubenswrapper[4803]: I0127 21:51:17.030892 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7" exitCode=0 Jan 27 21:51:17 crc kubenswrapper[4803]: I0127 21:51:17.030974 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7"} Jan 27 21:51:17 crc kubenswrapper[4803]: I0127 21:51:17.031618 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"eab3307c7662fa4415bdda98a4550f98a4f3e4518c2ba81876e66dccef2535a4"} Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.322212 4803 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323413 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b352572-d580-4fb4-b60a-49db17322472" containerName="extract-content" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323440 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b352572-d580-4fb4-b60a-49db17322472" containerName="extract-content" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323466 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerName="extract-content" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323485 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerName="extract-content" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323509 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d40bdc5-4adc-4650-934f-265f8614a1bb" containerName="extract-utilities" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323524 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d40bdc5-4adc-4650-934f-265f8614a1bb" containerName="extract-utilities" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323549 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerName="extract-utilities" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323562 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerName="extract-utilities" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323580 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323594 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323609 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68252b8f-1a1c-46c9-b037-743fd227e55a" containerName="extract-utilities" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 
21:51:22.323621 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="68252b8f-1a1c-46c9-b037-743fd227e55a" containerName="extract-utilities" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323634 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b352572-d580-4fb4-b60a-49db17322472" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323647 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b352572-d580-4fb4-b60a-49db17322472" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323665 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b352572-d580-4fb4-b60a-49db17322472" containerName="extract-utilities" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323678 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b352572-d580-4fb4-b60a-49db17322472" containerName="extract-utilities" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323693 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d40bdc5-4adc-4650-934f-265f8614a1bb" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323706 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d40bdc5-4adc-4650-934f-265f8614a1bb" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323722 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68252b8f-1a1c-46c9-b037-743fd227e55a" containerName="extract-content" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323734 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="68252b8f-1a1c-46c9-b037-743fd227e55a" containerName="extract-content" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323754 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d40bdc5-4adc-4650-934f-265f8614a1bb" containerName="extract-content" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323766 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d40bdc5-4adc-4650-934f-265f8614a1bb" containerName="extract-content" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323783 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68252b8f-1a1c-46c9-b037-743fd227e55a" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323794 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="68252b8f-1a1c-46c9-b037-743fd227e55a" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.323811 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0181537-cbf9-4620-8709-04a5fc5cf618" containerName="pruner" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.323824 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0181537-cbf9-4620-8709-04a5fc5cf618" containerName="pruner" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.324026 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cdbf4f1-7429-42d7-92fc-2ac4f2c91e7a" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.324048 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0181537-cbf9-4620-8709-04a5fc5cf618" containerName="pruner" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.324066 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b352572-d580-4fb4-b60a-49db17322472" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: 
I0127 21:51:22.324085 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d40bdc5-4adc-4650-934f-265f8614a1bb" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.324101 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="68252b8f-1a1c-46c9-b037-743fd227e55a" containerName="registry-server" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.324639 4803 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.324677 4803 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.324948 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.324996 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.325025 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.325047 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.325066 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.325095 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.325111 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.325181 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.325200 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.325232 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.325249 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.325267 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.325283 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.325670 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b" gracePeriod=15 Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.325671 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba" gracePeriod=15 Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.325882 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba" gracePeriod=15 Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.325937 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078" gracePeriod=15 Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.326009 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.326052 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.326076 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.326094 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.326153 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.326169 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.326215 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a" gracePeriod=15 Jan 27 21:51:22 crc kubenswrapper[4803]: E0127 21:51:22.327180 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.327250 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.332524 4803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.477002 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.477350 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.477392 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.477421 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.477712 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.477908 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.478083 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.478168 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579483 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579546 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579576 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579580 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579604 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579620 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579644 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579671 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579690 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579679 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579725 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579693 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579688 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579712 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579730 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.579747 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:22 crc kubenswrapper[4803]: I0127 21:51:22.821181 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" containerName="oauth-openshift" containerID="cri-o://c8339b8df1bc0afb36378438618a109239883f21ca96f3143202bfd9bfc32a13" gracePeriod=15 Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.074129 4803 generic.go:334] "Generic (PLEG): container finished" podID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" containerID="008eff428752ba796936f41a3c6dc0a1670c26dd4abc07b36febd2283e20c101" exitCode=0 Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.074179 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf","Type":"ContainerDied","Data":"008eff428752ba796936f41a3c6dc0a1670c26dd4abc07b36febd2283e20c101"} Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.075829 4803 status_manager.go:851] "Failed to get status for pod" 
podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.077746 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.079372 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.080029 4803 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b" exitCode=0 Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.080051 4803 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba" exitCode=0 Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.080062 4803 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba" exitCode=0 Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.080072 4803 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078" exitCode=2 Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.080148 4803 scope.go:117] "RemoveContainer" containerID="78c61c07622f6e69732dcff6c88d148ffa2dabffee85c4ea7bcf664ee3a377b2" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.082045 4803 generic.go:334] "Generic (PLEG): container finished" podID="70091f5f-e06c-4cf3-8bc8-299f10207363" containerID="c8339b8df1bc0afb36378438618a109239883f21ca96f3143202bfd9bfc32a13" exitCode=0 Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.082086 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" event={"ID":"70091f5f-e06c-4cf3-8bc8-299f10207363","Type":"ContainerDied","Data":"c8339b8df1bc0afb36378438618a109239883f21ca96f3143202bfd9bfc32a13"} Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.140338 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.141020 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.141900 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.288901 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-cliconfig\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.288988 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-login\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.289039 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-error\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.289062 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-serving-cert\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.289095 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-policies\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.289116 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-dir\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.289173 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-ocp-branding-template\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc 
kubenswrapper[4803]: I0127 21:51:23.289208 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-provider-selection\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.289227 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbsg5\" (UniqueName: \"kubernetes.io/projected/70091f5f-e06c-4cf3-8bc8-299f10207363-kube-api-access-kbsg5\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.289256 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-idp-0-file-data\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.289286 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-trusted-ca-bundle\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.289321 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-service-ca\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.289338 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-router-certs\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.289364 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-session\") pod \"70091f5f-e06c-4cf3-8bc8-299f10207363\" (UID: \"70091f5f-e06c-4cf3-8bc8-299f10207363\") " Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.290002 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.290414 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.290508 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.290652 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.290669 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.295123 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70091f5f-e06c-4cf3-8bc8-299f10207363-kube-api-access-kbsg5" (OuterVolumeSpecName: "kube-api-access-kbsg5") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "kube-api-access-kbsg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.295320 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.295768 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.296175 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.296226 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.296351 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.296552 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.297023 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.297122 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "70091f5f-e06c-4cf3-8bc8-299f10207363" (UID: "70091f5f-e06c-4cf3-8bc8-299f10207363"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.391463 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.391514 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.391531 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.391549 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.391569 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.391584 4803 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.391710 4803 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70091f5f-e06c-4cf3-8bc8-299f10207363-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.392121 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.392164 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.392190 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbsg5\" (UniqueName: \"kubernetes.io/projected/70091f5f-e06c-4cf3-8bc8-299f10207363-kube-api-access-kbsg5\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.392212 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.392230 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.392249 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:23 crc kubenswrapper[4803]: I0127 21:51:23.392268 4803 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70091f5f-e06c-4cf3-8bc8-299f10207363-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.089051 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" event={"ID":"70091f5f-e06c-4cf3-8bc8-299f10207363","Type":"ContainerDied","Data":"5b338dca76870e9c377291e2af94b96822c7715b3a6fdc0306a22ccb8253ccd0"} Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.089271 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.089407 4803 scope.go:117] "RemoveContainer" containerID="c8339b8df1bc0afb36378438618a109239883f21ca96f3143202bfd9bfc32a13" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.090285 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.090607 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.092393 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.109393 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.109737 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.304482 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.305427 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.305918 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.405177 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kube-api-access\") pod \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.405272 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-var-lock\") pod \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.405370 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kubelet-dir\") pod \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\" (UID: \"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf\") " Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.405617 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-var-lock" (OuterVolumeSpecName: "var-lock") pod "d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" (UID: "d32a4347-7d5e-4c36-ab79-2815fa7b5fbf"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.405660 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" (UID: "d32a4347-7d5e-4c36-ab79-2815fa7b5fbf"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.405811 4803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.405827 4803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.413544 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" (UID: "d32a4347-7d5e-4c36-ab79-2815fa7b5fbf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.506823 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d32a4347-7d5e-4c36-ab79-2815fa7b5fbf-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.709862 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.711041 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.712137 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.712385 4803 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.712780 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.810124 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.810162 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.810230 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.810496 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.810524 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.810538 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.912028 4803 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.912355 4803 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:24 crc kubenswrapper[4803]: I0127 21:51:24.912441 4803 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.102385 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.102406 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d32a4347-7d5e-4c36-ab79-2815fa7b5fbf","Type":"ContainerDied","Data":"36f39860c2015bfc6c5befcccb6934e4957470c583f99ed8e9d48e5e6d762c37"} Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.102463 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36f39860c2015bfc6c5befcccb6934e4957470c583f99ed8e9d48e5e6d762c37" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.108680 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.109660 4803 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a" exitCode=0 Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.109692 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.109736 4803 scope.go:117] "RemoveContainer" containerID="6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.123635 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.124052 4803 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.124323 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.129095 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.129310 4803 scope.go:117] "RemoveContainer" containerID="dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.129729 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.130768 4803 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.143019 4803 scope.go:117] "RemoveContainer" containerID="3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.152362 4803 scope.go:117] "RemoveContainer" containerID="23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.168001 4803 scope.go:117] "RemoveContainer" containerID="17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.184550 4803 scope.go:117] "RemoveContainer" containerID="7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.207860 4803 scope.go:117] "RemoveContainer" containerID="6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b" Jan 27 21:51:25 crc kubenswrapper[4803]: E0127 21:51:25.208440 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\": container with ID starting with 6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b not found: ID does not exist" containerID="6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.208488 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b"} err="failed to get container status \"6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\": rpc error: code = NotFound desc = could not find container \"6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b\": container with ID starting with 6cfdcfa284bccad55c550bbbac949bb4531831bc200db9e5481c83f28c32100b not found: ID does not exist" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.208517 4803 scope.go:117] "RemoveContainer" containerID="dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba" Jan 27 21:51:25 crc kubenswrapper[4803]: E0127 21:51:25.208859 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\": container with ID starting with dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba not found: ID does not exist" containerID="dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.208885 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba"} err="failed to get container status \"dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\": rpc error: code = NotFound desc = could not find container 
\"dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba\": container with ID starting with dce315b0f6f393e3e2e02ad9407ae061e68cdec9b9a9da49d4469bf548c378ba not found: ID does not exist" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.208898 4803 scope.go:117] "RemoveContainer" containerID="3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba" Jan 27 21:51:25 crc kubenswrapper[4803]: E0127 21:51:25.209170 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\": container with ID starting with 3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba not found: ID does not exist" containerID="3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.209196 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba"} err="failed to get container status \"3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\": rpc error: code = NotFound desc = could not find container \"3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba\": container with ID starting with 3cd5ddea82fd758a9b1d3ddd673d958c547f1d1f3f4c1fc3c1033244b8d2e1ba not found: ID does not exist" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.209209 4803 scope.go:117] "RemoveContainer" containerID="23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078" Jan 27 21:51:25 crc kubenswrapper[4803]: E0127 21:51:25.209463 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\": container with ID starting with 23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078 not found: ID does not exist" containerID="23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.209496 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078"} err="failed to get container status \"23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\": rpc error: code = NotFound desc = could not find container \"23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078\": container with ID starting with 23bab592c619192c13c571fbb8cb9ba73387d63f32f7acddc01410dd6cca9078 not found: ID does not exist" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.209519 4803 scope.go:117] "RemoveContainer" containerID="17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a" Jan 27 21:51:25 crc kubenswrapper[4803]: E0127 21:51:25.209776 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\": container with ID starting with 17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a not found: ID does not exist" containerID="17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.209796 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a"} 
err="failed to get container status \"17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\": rpc error: code = NotFound desc = could not find container \"17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a\": container with ID starting with 17b456e914c01f48bdabb2d974ba29709091ca28d7ddbd8ee38449ebbbd00f0a not found: ID does not exist" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.209807 4803 scope.go:117] "RemoveContainer" containerID="7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a" Jan 27 21:51:25 crc kubenswrapper[4803]: E0127 21:51:25.210106 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\": container with ID starting with 7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a not found: ID does not exist" containerID="7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a" Jan 27 21:51:25 crc kubenswrapper[4803]: I0127 21:51:25.210129 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a"} err="failed to get container status \"7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\": rpc error: code = NotFound desc = could not find container \"7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a\": container with ID starting with 7499bae42c3eb94074a906243984636c8cc8cd207ab42393ea2be2edf1bbf78a not found: ID does not exist" Jan 27 21:51:26 crc kubenswrapper[4803]: I0127 21:51:26.318446 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 21:51:26 crc kubenswrapper[4803]: E0127 21:51:26.390134 4803 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" volumeName="registry-storage" Jan 27 21:51:27 crc kubenswrapper[4803]: E0127 21:51:27.388897 4803 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:27 crc kubenswrapper[4803]: I0127 21:51:27.389381 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:27 crc kubenswrapper[4803]: E0127 21:51:27.416720 4803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:27 crc kubenswrapper[4803]: E0127 21:51:27.417215 4803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:27 crc kubenswrapper[4803]: E0127 21:51:27.417554 4803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:27 crc kubenswrapper[4803]: E0127 21:51:27.417780 4803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:27 crc kubenswrapper[4803]: E0127 21:51:27.418044 4803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:27 crc kubenswrapper[4803]: I0127 21:51:27.418080 4803 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 21:51:27 crc kubenswrapper[4803]: E0127 21:51:27.418277 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Jan 27 21:51:27 crc kubenswrapper[4803]: W0127 21:51:27.424126 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-6affaa6466187aa6d51f61a554e6d6cef79742cf70bd37579888ec41dca6ae62 WatchSource:0}: Error finding container 6affaa6466187aa6d51f61a554e6d6cef79742cf70bd37579888ec41dca6ae62: Status 404 returned error can't find the container with id 6affaa6466187aa6d51f61a554e6d6cef79742cf70bd37579888ec41dca6ae62 Jan 27 21:51:27 crc kubenswrapper[4803]: E0127 21:51:27.433360 4803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eb4f988755410 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 21:51:27.432430608 +0000 UTC m=+239.848452307,LastTimestamp:2026-01-27 21:51:27.432430608 +0000 UTC m=+239.848452307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 21:51:27 crc kubenswrapper[4803]: E0127 21:51:27.619424 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Jan 27 21:51:27 crc kubenswrapper[4803]: E0127 21:51:27.668312 4803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eb4f988755410 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 21:51:27.432430608 +0000 UTC m=+239.848452307,LastTimestamp:2026-01-27 21:51:27.432430608 +0000 UTC m=+239.848452307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 21:51:28 crc kubenswrapper[4803]: E0127 21:51:28.021656 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Jan 27 21:51:28 crc kubenswrapper[4803]: I0127 21:51:28.129512 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"14e54f5f2044147629f221e836a4bccff18e16741627e70c6589df9a181fbaa8"} Jan 27 21:51:28 crc kubenswrapper[4803]: I0127 21:51:28.129596 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6affaa6466187aa6d51f61a554e6d6cef79742cf70bd37579888ec41dca6ae62"} Jan 27 21:51:28 crc kubenswrapper[4803]: E0127 21:51:28.130487 4803 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:51:28 crc kubenswrapper[4803]: I0127 21:51:28.130493 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.194:6443: connect: connection refused" Jan 27 21:51:28 crc kubenswrapper[4803]: I0127 21:51:28.130962 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:28 crc kubenswrapper[4803]: I0127 21:51:28.309287 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:28 crc kubenswrapper[4803]: I0127 21:51:28.309555 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:28 crc kubenswrapper[4803]: E0127 21:51:28.822421 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Jan 27 21:51:30 crc kubenswrapper[4803]: E0127 21:51:30.424242 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Jan 27 21:51:33 crc kubenswrapper[4803]: E0127 21:51:33.625811 4803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="6.4s" Jan 27 21:51:35 crc kubenswrapper[4803]: I0127 21:51:35.311078 4803 util.go:30] "No sandbox for pod can be found. 
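The retry intervals in the lease-controller errors above double on each failed attempt: 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s. That is classic exponential backoff while the API server is unreachable, after the controller's five fast in-place update attempts fail and it falls back to ensure-lease. A minimal sketch of the doubling, where the base interval matches the log and the 7s cap is an illustrative assumption, not the controller's actual constant:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const base = 200 * time.Millisecond
        const maxInterval = 7 * time.Second // illustrative cap, not the real constant

        interval := base
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: lease update failed, will retry, interval=%v\n", attempt, interval)
            // time.Sleep(interval) // in real code, wait this long before retrying
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }

Run as-is this prints exactly the interval sequence seen in the log: 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s.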
Jan 27 21:51:35 crc kubenswrapper[4803]: I0127 21:51:35.311078 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:35 crc kubenswrapper[4803]: I0127 21:51:35.312369 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:35 crc kubenswrapper[4803]: I0127 21:51:35.312616 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:35 crc kubenswrapper[4803]: I0127 21:51:35.339759 4803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:35 crc kubenswrapper[4803]: I0127 21:51:35.339813 4803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:35 crc kubenswrapper[4803]: E0127 21:51:35.340700 4803 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:35 crc kubenswrapper[4803]: I0127 21:51:35.341779 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:35 crc kubenswrapper[4803]: W0127 21:51:35.376272 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ee6d0d4a24a71177db67ea680ec6e14dfb07f341acfed1df4e5806a1be320c5b WatchSource:0}: Error finding container ee6d0d4a24a71177db67ea680ec6e14dfb07f341acfed1df4e5806a1be320c5b: Status 404 returned error can't find the container with id ee6d0d4a24a71177db67ea680ec6e14dfb07f341acfed1df4e5806a1be320c5b Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.183382 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.183751 4803 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e" exitCode=1 Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.183842 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e"} Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.184732 4803 scope.go:117] "RemoveContainer" containerID="2b4173fa8a403e62c2dfa8af66ad7645d0624f4f7f339fc35d66f857ac9e572e" Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.185246 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363"
pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.185583 4803 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.186033 4803 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="302b463532a4263a04b66f4b52c55b4a6a09bbcda1a0bbb1eb82b00a05c3a685" exitCode=0 Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.186068 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"302b463532a4263a04b66f4b52c55b4a6a09bbcda1a0bbb1eb82b00a05c3a685"} Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.186095 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ee6d0d4a24a71177db67ea680ec6e14dfb07f341acfed1df4e5806a1be320c5b"} Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.186344 4803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.186359 4803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.186592 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:36 crc kubenswrapper[4803]: E0127 21:51:36.186666 4803 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.186908 4803 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.187125 4803 status_manager.go:851] "Failed to get status for pod" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
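
The "Trying to delete pod" / "Failed deleting a mirror pod" loop above is static-pod reconciliation. kube-apiserver-crc runs from a manifest on the node's disk, and its API-side representation is a mirror pod; the mirror pod with UID ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02 no longer matches the manifest, so the kubelet keeps trying to delete it and fails while the very API server it is restarting is still unreachable (the deletion finally lands at 21:51:43 below, "Deleted mirror pod because it is outdated"). A hedged client-go sketch of that kind of delete, guarded on the stale UID so a freshly recreated mirror pod is left alone; the kubeconfig path is an assumption and this is not the kubelet's actual mirror_client.go:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed: a kubeconfig-based client; the kubelet uses its own credentials.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        uid := types.UID("ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02") // stale mirror pod UID from the log
        grace := int64(0)
        err = client.CoreV1().Pods("openshift-kube-apiserver").Delete(context.TODO(),
            "kube-apiserver-crc",
            metav1.DeleteOptions{
                GracePeriodSeconds: &grace,
                // Only delete if the UID still matches, so a recreated mirror
                // pod with a new UID is not deleted by mistake.
                Preconditions: &metav1.Preconditions{UID: &uid},
            })
        if err != nil {
            fmt.Println("failed deleting a mirror pod:", err) // e.g. connection refused while the apiserver is down
            return
        }
        fmt.Println("deleted outdated mirror pod")
    }
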
Jan 27 21:51:36 crc kubenswrapper[4803]: I0127 21:51:36.187446 4803 status_manager.go:851] "Failed to get status for pod" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" pod="openshift-authentication/oauth-openshift-558db77b4-7x4wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7x4wr\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 27 21:51:37 crc kubenswrapper[4803]: I0127 21:51:37.195897 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 21:51:37 crc kubenswrapper[4803]: I0127 21:51:37.196271 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d1a2050952e08d00e14c0a7da2f59e99c3860c754a0514dbf809469a6d906e5e"} Jan 27 21:51:37 crc kubenswrapper[4803]: I0127 21:51:37.199866 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dfda471e15f214ac7f531e7af57dff26618305bfa1dbf8879996fac294cb8879"} Jan 27 21:51:37 crc kubenswrapper[4803]: I0127 21:51:37.199907 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d8d68ac5fa40273f95be37f1fbeb1a05d0e1d6fac23d9a3558c09d1230bd5faa"} Jan 27 21:51:37 crc kubenswrapper[4803]: I0127 21:51:37.199916 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a2eed547750873878e72aee7855a3bca8fc33448fb804ee0c10f459efa50d5b4"} Jan 27 21:51:37 crc kubenswrapper[4803]: I0127 21:51:37.199926 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4ef4e3017148385ba27a382f44a1ce5f1b92243821d69a397daa274f7d39a544"} Jan 27 21:51:37 crc kubenswrapper[4803]: I0127 21:51:37.560986 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:51:37 crc kubenswrapper[4803]: I0127 21:51:37.566719 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:51:38 crc kubenswrapper[4803]: I0127 21:51:38.208252 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"64679d3c54aa88be2ac42c99c9431fca81a28960463db49f212890856b9cb172"} Jan 27 21:51:38 crc kubenswrapper[4803]: I0127 21:51:38.208536 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
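
The probe lines above (and the matching ones for kube-apiserver-crc at 21:51:40 below) trace standard probe ordering during bring-up: the startup probe first reports "unhealthy", then flips to "started", and only after that do readiness results begin to count; the readiness events with status="" appear to be checks recorded before any definitive result exists. A sketch of probes with this shape using the Kubernetes core/v1 Go types; the paths, port, periods, and thresholds here are assumptions for illustration, not values read from the CRC manifests:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        c := corev1.Container{
            Name: "kube-apiserver", // container name from the log; probe values below are assumed
            StartupProbe: &corev1.Probe{
                ProbeHandler: corev1.ProbeHandler{HTTPGet: &corev1.HTTPGetAction{
                    Path: "/livez", Port: intstr.FromInt(6443), Scheme: corev1.URISchemeHTTPS,
                }},
                PeriodSeconds:    5,
                FailureThreshold: 30, // tolerate a slow cold start before the kubelet restarts it
            },
            ReadinessProbe: &corev1.Probe{
                ProbeHandler: corev1.ProbeHandler{HTTPGet: &corev1.HTTPGetAction{
                    Path: "/readyz", Port: intstr.FromInt(6443), Scheme: corev1.URISchemeHTTPS,
                }},
                PeriodSeconds: 10, // only consulted once the startup probe has succeeded
            },
        }
        fmt.Printf("startup gates readiness for container %q\n", c.Name)
    }
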
Jan 27 21:51:38 crc kubenswrapper[4803]: I0127 21:51:38.208781 4803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:38 crc kubenswrapper[4803]: I0127 21:51:38.208815 4803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:40 crc kubenswrapper[4803]: I0127 21:51:40.342321 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:40 crc kubenswrapper[4803]: I0127 21:51:40.342976 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:40 crc kubenswrapper[4803]: I0127 21:51:40.348212 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:43 crc kubenswrapper[4803]: I0127 21:51:43.218689 4803 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:43 crc kubenswrapper[4803]: I0127 21:51:43.236449 4803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:43 crc kubenswrapper[4803]: I0127 21:51:43.236484 4803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:43 crc kubenswrapper[4803]: I0127 21:51:43.236925 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:43 crc kubenswrapper[4803]: I0127 21:51:43.243598 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:43 crc kubenswrapper[4803]: I0127 21:51:43.245446 4803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="01a85126-11c9-4ed7-b779-35a291c10f40" Jan 27 21:51:44 crc kubenswrapper[4803]: I0127 21:51:44.244369 4803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:44 crc kubenswrapper[4803]: I0127 21:51:44.244411 4803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:45 crc kubenswrapper[4803]: I0127 21:51:45.250470 4803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:45 crc kubenswrapper[4803]: I0127 21:51:45.250825 4803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ff4d47d1-bf5a-4f61-bcf2-a08d47a52e02" Jan 27 21:51:48 crc kubenswrapper[4803]: I0127 21:51:48.325766 4803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="01a85126-11c9-4ed7-b779-35a291c10f40" Jan 27 21:51:50 crc kubenswrapper[4803]: I0127 21:51:50.356765 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 21:51:53 crc kubenswrapper[4803]: I0127 21:51:53.522122 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
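
Everything from 21:51:53 onward below is the other side of the recovery: with the API server answering again, the kubelet's reflectors complete their initial LIST and begin WATCHing, and each logs "Caches populated", one line per Secret or ConfigMap that a pod on this node references (plus node-level informers such as the *v1.RuntimeClass and *v1.Node ones from client-go's informer factory). A pod that mounts one of these objects is only started once the relevant cache has synced. A minimal client-go sketch of the same list-then-watch sync, assuming a kubeconfig path; the kubelet actually runs a dedicated reflector per referenced object rather than a namespace-wide informer, but the sync semantics are the same:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // One factory scoped to a namespace seen in the log; it lists, then
        // watches, then reports the cache as synced ("populated").
        factory := informers.NewSharedInformerFactoryWithOptions(
            client, 10*time.Minute, informers.WithNamespace("openshift-dns-operator"))
        cmInformer := factory.Core().V1().ConfigMaps().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
            panic("cache never synced")
        }
        fmt.Println(`caches populated for *v1.ConfigMap in "openshift-dns-operator"`)
    }
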
Jan 27 21:51:54 crc kubenswrapper[4803]: I0127 21:51:54.168081 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 21:51:54 crc kubenswrapper[4803]: I0127 21:51:54.293640 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 21:51:54 crc kubenswrapper[4803]: I0127 21:51:54.373079 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 21:51:54 crc kubenswrapper[4803]: I0127 21:51:54.780776 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 21:51:54 crc kubenswrapper[4803]: I0127 21:51:54.798175 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 21:51:54 crc kubenswrapper[4803]: I0127 21:51:54.877266 4803 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 21:51:54 crc kubenswrapper[4803]: I0127 21:51:54.976339 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 21:51:55 crc kubenswrapper[4803]: I0127 21:51:55.265656 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 21:51:55 crc kubenswrapper[4803]: I0127 21:51:55.286689 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 21:51:55 crc kubenswrapper[4803]: I0127 21:51:55.338349 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 21:51:55 crc kubenswrapper[4803]: I0127 21:51:55.354826 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 21:51:55 crc kubenswrapper[4803]: I0127 21:51:55.459687 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 21:51:55 crc kubenswrapper[4803]: I0127 21:51:55.459913 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 21:51:55 crc kubenswrapper[4803]: I0127 21:51:55.462893 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 21:51:55 crc kubenswrapper[4803]: I0127 21:51:55.650128 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 21:51:55 crc kubenswrapper[4803]: I0127 21:51:55.660423 4803 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 21:51:55 crc kubenswrapper[4803]: I0127 21:51:55.749299 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 21:51:55 crc kubenswrapper[4803]: I0127 21:51:55.760005 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.093328 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.098648 4803 reflector.go:368] Caches populated for
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.191764 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.216911 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.289740 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.391612 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.480194 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.511303 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.546954 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.654311 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.782763 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.809575 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 21:51:56 crc kubenswrapper[4803]: I0127 21:51:56.822875 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.047151 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.062798 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.068484 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.092198 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.269027 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.278522 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.349281 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 21:51:57 crc 
kubenswrapper[4803]: I0127 21:51:57.368781 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.617808 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.649771 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.660876 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.788816 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.808518 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.948046 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 21:51:57 crc kubenswrapper[4803]: I0127 21:51:57.985839 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.058184 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.247385 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.264744 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.305349 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.319382 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.513274 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.525908 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.540272 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.636540 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.708099 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.769009 4803 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.817711 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.878135 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.993713 4803 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.998636 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-7x4wr"] Jan 27 21:51:58 crc kubenswrapper[4803]: I0127 21:51:58.998716 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.003329 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.021980 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.029520 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.029493882 podStartE2EDuration="16.029493882s" podCreationTimestamp="2026-01-27 21:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:51:59.026050079 +0000 UTC m=+271.442071788" watchObservedRunningTime="2026-01-27 21:51:59.029493882 +0000 UTC m=+271.445515621" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.082708 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.133741 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.271258 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.342735 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.374266 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.378220 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.538452 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.541920 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.561028 4803 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.582361 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.644581 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.703305 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.764111 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.809565 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.823821 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.874608 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.929874 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.934128 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 21:51:59 crc kubenswrapper[4803]: I0127 21:51:59.937500 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.027485 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.163880 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.181133 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.248195 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.315452 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" path="/var/lib/kubelet/pods/70091f5f-e06c-4cf3-8bc8-299f10207363/volumes" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.402294 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.489995 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.565904 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 21:52:00 crc 
kubenswrapper[4803]: I0127 21:52:00.715713 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.887601 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.945791 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.954978 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 21:52:00 crc kubenswrapper[4803]: I0127 21:52:00.964809 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.055690 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.131884 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.256525 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.289511 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.340339 4803 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.359352 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.371449 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.399070 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.452123 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.471206 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.472914 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.494880 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.511962 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.587346 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.740753 4803 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-apiserver"/"serving-cert" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.787345 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.864319 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.866244 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.867314 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.870606 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 21:52:01 crc kubenswrapper[4803]: I0127 21:52:01.949060 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.000100 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.010567 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.061151 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.254612 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.355840 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.408835 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.471302 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.520094 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.575048 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.595994 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.638497 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.843648 4803 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.883317 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.931999 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.945002 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 21:52:02 crc kubenswrapper[4803]: I0127 21:52:02.967479 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.006149 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.006722 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.022797 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.094577 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.115836 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.135430 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.269996 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.353557 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.360628 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.379502 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.550863 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.561551 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.693570 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.713003 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.739196 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 
21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.848824 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.858180 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 21:52:03 crc kubenswrapper[4803]: I0127 21:52:03.893693 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.034363 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.089434 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.098833 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.107830 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.195946 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.368303 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.446374 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.473621 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.533429 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.542326 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.543952 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.618585 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.620242 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.624324 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.693650 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.694428 4803 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.737831 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.762324 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.796446 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.812145 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.829528 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 21:52:04 crc kubenswrapper[4803]: I0127 21:52:04.893696 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.024563 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.028621 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.053970 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.121249 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.126939 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.198349 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.279338 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.287319 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.310571 4803 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.311192 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://14e54f5f2044147629f221e836a4bccff18e16741627e70c6589df9a181fbaa8" gracePeriod=5 Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.340370 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.341881 4803 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.371645 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.387091 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.456352 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.601723 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.665257 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.854692 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.877543 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.885439 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.917609 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.934746 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 21:52:05 crc kubenswrapper[4803]: I0127 21:52:05.971754 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.003340 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.024499 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.115342 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.182204 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.304586 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.360302 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.521336 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-769fc69b77-cp7hp"]
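
The SyncLoop ADD above is the replacement oauth-openshift pod (oauth-openshift-769fc69b77-cp7hp) arriving from the API. On admission, the kubelet's CPU and memory managers first purge per-container state left behind by pods that no longer exist, which is what the RemoveStaleState and "Deleted CPUSet assignment" lines that follow show for the old oauth-openshift, installer, and startup-monitor containers; the volume manager then works through the new pod's volumes further below (VerifyControllerAttachedVolume, then MountVolume). A simplified Go sketch of that kind of stale-state sweep; the map shapes, names, and values here are assumptions for illustration, not the kubelet's cpu_manager internals:

    package main

    import "fmt"

    // key identifies an assignment the way the log reports it: pod UID plus container name.
    type key struct{ podUID, container string }

    // removeStaleState drops CPU assignments for pods the kubelet no longer
    // tracks; a simplified analogue of the RemoveStaleState lines in the log.
    func removeStaleState(assignments map[key][]int, activePods map[string]bool) {
        for k := range assignments {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.container)
                delete(assignments, k) // deleting while ranging over a map is safe in Go
            }
        }
    }

    func main() {
        // UIDs below are taken from the log; the CPU lists are made up.
        assignments := map[key][]int{
            {"70091f5f-e06c-4cf3-8bc8-299f10207363", "oauth-openshift"}: {2, 3},
            {"f85e55b1a89d02b0cb034b1ea31ed45a", "startup-monitor"}:     {1},
            {"3446baa2-c061-41ff-9652-16734b5bb97a", "oauth-openshift"}: {0}, // new pod, kept
        }
        active := map[string]bool{"3446baa2-c061-41ff-9652-16734b5bb97a": true}
        removeStaleState(assignments, active)
        fmt.Println("remaining assignments:", len(assignments))
    }
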
Jan 27 21:52:06 crc kubenswrapper[4803]: E0127 21:52:06.521558 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" containerName="oauth-openshift" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.521575 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" containerName="oauth-openshift" Jan 27 21:52:06 crc kubenswrapper[4803]: E0127 21:52:06.521593 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" containerName="installer" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.521602 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" containerName="installer" Jan 27 21:52:06 crc kubenswrapper[4803]: E0127 21:52:06.521610 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.521616 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.521695 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.521702 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="70091f5f-e06c-4cf3-8bc8-299f10207363" containerName="oauth-openshift" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.521716 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d32a4347-7d5e-4c36-ab79-2815fa7b5fbf" containerName="installer" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.522099 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.525986 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.527585 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.527980 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.528326 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.528925 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.530780 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.530817 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.531542 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.531591 4803 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.537401 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.537655 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.538291 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.540969 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.542292 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.544626 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.544704 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.545134 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.564195 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.571611 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-769fc69b77-cp7hp"] Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.605606 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.624607 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.647760 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-router-certs\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.647838 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3446baa2-c061-41ff-9652-16734b5bb97a-audit-dir\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.647894 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.647924 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-template-login\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.647947 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-service-ca\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.647978 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-session\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.648020 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.648103 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-template-error\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.648134 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m79qf\" (UniqueName: \"kubernetes.io/projected/3446baa2-c061-41ff-9652-16734b5bb97a-kube-api-access-m79qf\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.648188 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-audit-policies\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.648297 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.648331 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.648368 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.648466 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.663209 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.665664 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.698484 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.749553 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.749618 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m79qf\" (UniqueName: \"kubernetes.io/projected/3446baa2-c061-41ff-9652-16734b5bb97a-kube-api-access-m79qf\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.749644 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-template-error\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc 
kubenswrapper[4803]: I0127 21:52:06.749709 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-audit-policies\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.749744 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.749769 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.749797 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.752144 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.752271 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-router-certs\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.752323 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3446baa2-c061-41ff-9652-16734b5bb97a-audit-dir\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.752352 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: 
I0127 21:52:06.752377 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-template-login\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.752400 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-service-ca\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.752429 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-session\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.752458 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.753392 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3446baa2-c061-41ff-9652-16734b5bb97a-audit-dir\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.751748 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.751956 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-audit-policies\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.754552 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-service-ca\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.757672 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-router-certs\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.759478 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.759805 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.760695 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.761602 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-template-error\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.762229 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-template-login\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.765179 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.769131 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.775968 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m79qf\" (UniqueName: \"kubernetes.io/projected/3446baa2-c061-41ff-9652-16734b5bb97a-kube-api-access-m79qf\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 
21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.776408 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3446baa2-c061-41ff-9652-16734b5bb97a-v4-0-config-system-session\") pod \"oauth-openshift-769fc69b77-cp7hp\" (UID: \"3446baa2-c061-41ff-9652-16734b5bb97a\") " pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.789621 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.822521 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.905649 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.933682 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.937423 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 21:52:06 crc kubenswrapper[4803]: I0127 21:52:06.965315 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 21:52:07 crc kubenswrapper[4803]: I0127 21:52:07.036233 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 21:52:07 crc kubenswrapper[4803]: I0127 21:52:07.042283 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 21:52:07 crc kubenswrapper[4803]: I0127 21:52:07.212143 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 21:52:07 crc kubenswrapper[4803]: I0127 21:52:07.222351 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 21:52:07 crc kubenswrapper[4803]: I0127 21:52:07.301627 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-769fc69b77-cp7hp"] Jan 27 21:52:07 crc kubenswrapper[4803]: I0127 21:52:07.367787 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 21:52:07 crc kubenswrapper[4803]: I0127 21:52:07.376907 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" event={"ID":"3446baa2-c061-41ff-9652-16734b5bb97a","Type":"ContainerStarted","Data":"ee15d4bbac6c6c3322355a0f80803f4658dc577f729449c09a4be1b99a9aabf2"} Jan 27 21:52:07 crc kubenswrapper[4803]: I0127 21:52:07.576040 4803 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 21:52:07 crc kubenswrapper[4803]: I0127 21:52:07.582341 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 21:52:07 crc kubenswrapper[4803]: I0127 21:52:07.779817 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 21:52:07 crc 
kubenswrapper[4803]: I0127 21:52:07.936721 4803 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 21:52:07 crc kubenswrapper[4803]: I0127 21:52:07.938559 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.046251 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.049034 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.169089 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.185509 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.349254 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.385117 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" event={"ID":"3446baa2-c061-41ff-9652-16734b5bb97a","Type":"ContainerStarted","Data":"2858e5cb08be19324a1c5c32c6c51bfafa2bf9f9357bbbe587d92af80f4560ee"} Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.386466 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.393792 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.405902 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" podStartSLOduration=71.405879618 podStartE2EDuration="1m11.405879618s" podCreationTimestamp="2026-01-27 21:50:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:52:08.404275985 +0000 UTC m=+280.820297714" watchObservedRunningTime="2026-01-27 21:52:08.405879618 +0000 UTC m=+280.821901357" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.517597 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.527530 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.767201 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.809453 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 21:52:08 crc kubenswrapper[4803]: I0127 21:52:08.861232 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 21:52:09 crc kubenswrapper[4803]: I0127 21:52:09.614959 4803 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 21:52:09 crc kubenswrapper[4803]: I0127 21:52:09.739711 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 21:52:10 crc kubenswrapper[4803]: I0127 21:52:10.040474 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 21:52:10 crc kubenswrapper[4803]: I0127 21:52:10.400441 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 21:52:10 crc kubenswrapper[4803]: I0127 21:52:10.400525 4803 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="14e54f5f2044147629f221e836a4bccff18e16741627e70c6589df9a181fbaa8" exitCode=137 Jan 27 21:52:10 crc kubenswrapper[4803]: I0127 21:52:10.418260 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 21:52:10 crc kubenswrapper[4803]: I0127 21:52:10.909933 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 21:52:10 crc kubenswrapper[4803]: I0127 21:52:10.910040 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.013643 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.014265 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.013940 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.014304 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.014368 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.014415 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.014441 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.014474 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.014630 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.014790 4803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.014822 4803 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.014885 4803 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.014908 4803 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.026901 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.105509 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.116826 4803 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.408589 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.408660 4803 scope.go:117] "RemoveContainer" containerID="14e54f5f2044147629f221e836a4bccff18e16741627e70c6589df9a181fbaa8" Jan 27 21:52:11 crc kubenswrapper[4803]: I0127 21:52:11.408734 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 21:52:12 crc kubenswrapper[4803]: I0127 21:52:12.321589 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 21:52:12 crc kubenswrapper[4803]: I0127 21:52:12.745183 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 21:52:25 crc kubenswrapper[4803]: I0127 21:52:25.502998 4803 generic.go:334] "Generic (PLEG): container finished" podID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" containerID="69e7c83be0df564cb9724449030dd860fee239fa3e3d4f482149da324626e2cc" exitCode=0 Jan 27 21:52:25 crc kubenswrapper[4803]: I0127 21:52:25.503103 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" event={"ID":"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0","Type":"ContainerDied","Data":"69e7c83be0df564cb9724449030dd860fee239fa3e3d4f482149da324626e2cc"} Jan 27 21:52:25 crc kubenswrapper[4803]: I0127 21:52:25.503980 4803 scope.go:117] "RemoveContainer" containerID="69e7c83be0df564cb9724449030dd860fee239fa3e3d4f482149da324626e2cc" Jan 27 21:52:26 crc kubenswrapper[4803]: I0127 21:52:26.510341 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" event={"ID":"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0","Type":"ContainerStarted","Data":"12def603a43e4c904acbe458342b78366ea296dbf65f1eb128344ebd091f0bcf"} Jan 27 21:52:26 crc kubenswrapper[4803]: I0127 21:52:26.511005 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:52:26 crc kubenswrapper[4803]: I0127 21:52:26.515032 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:52:28 crc kubenswrapper[4803]: I0127 21:52:28.050929 4803 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.118805 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-74666"] Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.119664 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" podUID="6e8228ba-8397-4400-b30f-07dcf24d6fb5" containerName="controller-manager" containerID="cri-o://7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca" gracePeriod=30 Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.206087 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq"] Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.206478 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" podUID="fc396037-51ea-4671-bc9d-821a5505ace9" containerName="route-controller-manager" containerID="cri-o://23c065fd4f4c1811f7bd7fc7b50356ca0e625f9fde9a801955a9fc132f2d3e28" gracePeriod=30 Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.480070 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.555200 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e8228ba-8397-4400-b30f-07dcf24d6fb5-serving-cert\") pod \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.555653 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-config\") pod \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.555709 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-proxy-ca-bundles\") pod \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.555758 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-client-ca\") pod \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.555828 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbfsf\" (UniqueName: \"kubernetes.io/projected/6e8228ba-8397-4400-b30f-07dcf24d6fb5-kube-api-access-kbfsf\") pod \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\" (UID: \"6e8228ba-8397-4400-b30f-07dcf24d6fb5\") " Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.556369 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-config" (OuterVolumeSpecName: "config") pod "6e8228ba-8397-4400-b30f-07dcf24d6fb5" (UID: "6e8228ba-8397-4400-b30f-07dcf24d6fb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.556665 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-client-ca" (OuterVolumeSpecName: "client-ca") pod "6e8228ba-8397-4400-b30f-07dcf24d6fb5" (UID: "6e8228ba-8397-4400-b30f-07dcf24d6fb5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.556998 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6e8228ba-8397-4400-b30f-07dcf24d6fb5" (UID: "6e8228ba-8397-4400-b30f-07dcf24d6fb5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.561335 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e8228ba-8397-4400-b30f-07dcf24d6fb5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6e8228ba-8397-4400-b30f-07dcf24d6fb5" (UID: "6e8228ba-8397-4400-b30f-07dcf24d6fb5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.561368 4803 generic.go:334] "Generic (PLEG): container finished" podID="fc396037-51ea-4671-bc9d-821a5505ace9" containerID="23c065fd4f4c1811f7bd7fc7b50356ca0e625f9fde9a801955a9fc132f2d3e28" exitCode=0 Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.561448 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" event={"ID":"fc396037-51ea-4671-bc9d-821a5505ace9","Type":"ContainerDied","Data":"23c065fd4f4c1811f7bd7fc7b50356ca0e625f9fde9a801955a9fc132f2d3e28"} Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.561682 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8228ba-8397-4400-b30f-07dcf24d6fb5-kube-api-access-kbfsf" (OuterVolumeSpecName: "kube-api-access-kbfsf") pod "6e8228ba-8397-4400-b30f-07dcf24d6fb5" (UID: "6e8228ba-8397-4400-b30f-07dcf24d6fb5"). InnerVolumeSpecName "kube-api-access-kbfsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.563200 4803 generic.go:334] "Generic (PLEG): container finished" podID="6e8228ba-8397-4400-b30f-07dcf24d6fb5" containerID="7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca" exitCode=0 Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.563240 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" event={"ID":"6e8228ba-8397-4400-b30f-07dcf24d6fb5","Type":"ContainerDied","Data":"7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca"} Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.563273 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" event={"ID":"6e8228ba-8397-4400-b30f-07dcf24d6fb5","Type":"ContainerDied","Data":"4ea3024473ebf837bfa276f58a52d56561b96cdd8b718d8601b802a8609a2795"} Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.563294 4803 scope.go:117] "RemoveContainer" containerID="7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.563415 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-74666" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.575428 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.584327 4803 scope.go:117] "RemoveContainer" containerID="7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca" Jan 27 21:52:34 crc kubenswrapper[4803]: E0127 21:52:34.585095 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca\": container with ID starting with 7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca not found: ID does not exist" containerID="7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.585127 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca"} err="failed to get container status \"7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca\": rpc error: code = NotFound desc = could not find container \"7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca\": container with ID starting with 7ecc611f216db45241ca14ce3e78e16a2e601938dbcfcc7b7e0a176f702207ca not found: ID does not exist" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.614474 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-74666"] Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.618787 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-74666"] Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.656956 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-client-ca\") pod \"fc396037-51ea-4671-bc9d-821a5505ace9\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.657052 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc396037-51ea-4671-bc9d-821a5505ace9-serving-cert\") pod \"fc396037-51ea-4671-bc9d-821a5505ace9\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.657131 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-config\") pod \"fc396037-51ea-4671-bc9d-821a5505ace9\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.657258 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x984s\" (UniqueName: \"kubernetes.io/projected/fc396037-51ea-4671-bc9d-821a5505ace9-kube-api-access-x984s\") pod \"fc396037-51ea-4671-bc9d-821a5505ace9\" (UID: \"fc396037-51ea-4671-bc9d-821a5505ace9\") " Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.657532 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbfsf\" (UniqueName: \"kubernetes.io/projected/6e8228ba-8397-4400-b30f-07dcf24d6fb5-kube-api-access-kbfsf\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.657557 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6e8228ba-8397-4400-b30f-07dcf24d6fb5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.657569 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.657583 4803 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.657594 4803 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e8228ba-8397-4400-b30f-07dcf24d6fb5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.657930 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-client-ca" (OuterVolumeSpecName: "client-ca") pod "fc396037-51ea-4671-bc9d-821a5505ace9" (UID: "fc396037-51ea-4671-bc9d-821a5505ace9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.658085 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-config" (OuterVolumeSpecName: "config") pod "fc396037-51ea-4671-bc9d-821a5505ace9" (UID: "fc396037-51ea-4671-bc9d-821a5505ace9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.661362 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc396037-51ea-4671-bc9d-821a5505ace9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fc396037-51ea-4671-bc9d-821a5505ace9" (UID: "fc396037-51ea-4671-bc9d-821a5505ace9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.661555 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc396037-51ea-4671-bc9d-821a5505ace9-kube-api-access-x984s" (OuterVolumeSpecName: "kube-api-access-x984s") pod "fc396037-51ea-4671-bc9d-821a5505ace9" (UID: "fc396037-51ea-4671-bc9d-821a5505ace9"). InnerVolumeSpecName "kube-api-access-x984s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.758934 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x984s\" (UniqueName: \"kubernetes.io/projected/fc396037-51ea-4671-bc9d-821a5505ace9-kube-api-access-x984s\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.759009 4803 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.759026 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc396037-51ea-4671-bc9d-821a5505ace9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:34 crc kubenswrapper[4803]: I0127 21:52:34.759038 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc396037-51ea-4671-bc9d-821a5505ace9-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.545588 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv"] Jan 27 21:52:35 crc kubenswrapper[4803]: E0127 21:52:35.545916 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc396037-51ea-4671-bc9d-821a5505ace9" containerName="route-controller-manager" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.545937 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc396037-51ea-4671-bc9d-821a5505ace9" containerName="route-controller-manager" Jan 27 21:52:35 crc kubenswrapper[4803]: E0127 21:52:35.545975 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8228ba-8397-4400-b30f-07dcf24d6fb5" containerName="controller-manager" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.545986 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8228ba-8397-4400-b30f-07dcf24d6fb5" containerName="controller-manager" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.546144 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc396037-51ea-4671-bc9d-821a5505ace9" containerName="route-controller-manager" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.546165 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8228ba-8397-4400-b30f-07dcf24d6fb5" containerName="controller-manager" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.546725 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.549912 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.550189 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.550350 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.554650 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.559693 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2"] Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.561232 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.563219 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.567293 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.567923 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2"] Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.571733 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv"] Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.572455 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" event={"ID":"fc396037-51ea-4671-bc9d-821a5505ace9","Type":"ContainerDied","Data":"da765f13f65a5a1d0a18f4f307406e304012ee649f152dab1c47752f93b77130"} Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.572501 4803 scope.go:117] "RemoveContainer" containerID="23c065fd4f4c1811f7bd7fc7b50356ca0e625f9fde9a801955a9fc132f2d3e28" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.572612 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.583690 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.623892 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq"] Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.629380 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lmjtq"] Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.670803 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-config\") pod \"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.670890 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-config\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.670926 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccbst\" (UniqueName: \"kubernetes.io/projected/54bb03e2-83a7-46e0-bda4-453f7c0b622c-kube-api-access-ccbst\") pod \"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.670957 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-serving-cert\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.670992 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-client-ca\") pod \"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.671014 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-client-ca\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.671049 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blffg\" (UniqueName: 
\"kubernetes.io/projected/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-kube-api-access-blffg\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.671094 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54bb03e2-83a7-46e0-bda4-453f7c0b622c-serving-cert\") pod \"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.671123 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-proxy-ca-bundles\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.772447 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-client-ca\") pod \"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.772792 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-client-ca\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.772928 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blffg\" (UniqueName: \"kubernetes.io/projected/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-kube-api-access-blffg\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.773040 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54bb03e2-83a7-46e0-bda4-453f7c0b622c-serving-cert\") pod \"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.773134 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-proxy-ca-bundles\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.773233 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-config\") pod 
\"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.773326 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-config\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.773446 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccbst\" (UniqueName: \"kubernetes.io/projected/54bb03e2-83a7-46e0-bda4-453f7c0b622c-kube-api-access-ccbst\") pod \"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.773532 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-serving-cert\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.773490 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-client-ca\") pod \"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.774255 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-client-ca\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.774562 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-config\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.775456 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-config\") pod \"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.775540 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-proxy-ca-bundles\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: 
I0127 21:52:35.777436 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-serving-cert\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.778372 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54bb03e2-83a7-46e0-bda4-453f7c0b622c-serving-cert\") pod \"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.798408 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccbst\" (UniqueName: \"kubernetes.io/projected/54bb03e2-83a7-46e0-bda4-453f7c0b622c-kube-api-access-ccbst\") pod \"route-controller-manager-7b8ffd47fb-8d5x2\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.812210 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blffg\" (UniqueName: \"kubernetes.io/projected/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-kube-api-access-blffg\") pod \"controller-manager-5b86ff6bc6-pw7mv\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.874581 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:35 crc kubenswrapper[4803]: I0127 21:52:35.915187 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.104995 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv"] Jan 27 21:52:36 crc kubenswrapper[4803]: W0127 21:52:36.112151 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1871000_c9b6_488a_b8e2_5ad7bf5c3a91.slice/crio-073673fdf3bf3dea7ad303299988b6bfa6930691269be0b62388e0262ac75fc7 WatchSource:0}: Error finding container 073673fdf3bf3dea7ad303299988b6bfa6930691269be0b62388e0262ac75fc7: Status 404 returned error can't find the container with id 073673fdf3bf3dea7ad303299988b6bfa6930691269be0b62388e0262ac75fc7 Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.151743 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2"] Jan 27 21:52:36 crc kubenswrapper[4803]: W0127 21:52:36.164506 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54bb03e2_83a7_46e0_bda4_453f7c0b622c.slice/crio-a4e07a6668bbb26d8073989e8047c01c63390896930abc220eb54249093a22dc WatchSource:0}: Error finding container a4e07a6668bbb26d8073989e8047c01c63390896930abc220eb54249093a22dc: Status 404 returned error can't find the container with id a4e07a6668bbb26d8073989e8047c01c63390896930abc220eb54249093a22dc Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.314447 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e8228ba-8397-4400-b30f-07dcf24d6fb5" path="/var/lib/kubelet/pods/6e8228ba-8397-4400-b30f-07dcf24d6fb5/volumes" Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.315307 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc396037-51ea-4671-bc9d-821a5505ace9" path="/var/lib/kubelet/pods/fc396037-51ea-4671-bc9d-821a5505ace9/volumes" Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.586261 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" event={"ID":"54bb03e2-83a7-46e0-bda4-453f7c0b622c","Type":"ContainerStarted","Data":"ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f"} Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.586310 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" event={"ID":"54bb03e2-83a7-46e0-bda4-453f7c0b622c","Type":"ContainerStarted","Data":"a4e07a6668bbb26d8073989e8047c01c63390896930abc220eb54249093a22dc"} Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.586670 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.587834 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" event={"ID":"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91","Type":"ContainerStarted","Data":"7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0"} Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.587955 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" 
event={"ID":"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91","Type":"ContainerStarted","Data":"073673fdf3bf3dea7ad303299988b6bfa6930691269be0b62388e0262ac75fc7"} Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.588181 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.597480 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.601762 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" podStartSLOduration=2.6017459880000002 podStartE2EDuration="2.601745988s" podCreationTimestamp="2026-01-27 21:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:52:36.601584054 +0000 UTC m=+309.017605753" watchObservedRunningTime="2026-01-27 21:52:36.601745988 +0000 UTC m=+309.017767687" Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.629695 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" podStartSLOduration=2.629676039 podStartE2EDuration="2.629676039s" podCreationTimestamp="2026-01-27 21:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:52:36.628447476 +0000 UTC m=+309.044469185" watchObservedRunningTime="2026-01-27 21:52:36.629676039 +0000 UTC m=+309.045697748" Jan 27 21:52:36 crc kubenswrapper[4803]: I0127 21:52:36.701718 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:39 crc kubenswrapper[4803]: I0127 21:52:39.108499 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv"] Jan 27 21:52:39 crc kubenswrapper[4803]: I0127 21:52:39.130507 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2"] Jan 27 21:52:39 crc kubenswrapper[4803]: I0127 21:52:39.603297 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" podUID="e1871000-c9b6-488a-b8e2-5ad7bf5c3a91" containerName="controller-manager" containerID="cri-o://7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0" gracePeriod=30 Jan 27 21:52:39 crc kubenswrapper[4803]: I0127 21:52:39.603630 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" podUID="54bb03e2-83a7-46e0-bda4-453f7c0b622c" containerName="route-controller-manager" containerID="cri-o://ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f" gracePeriod=30 Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.036538 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.041548 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.127710 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccbst\" (UniqueName: \"kubernetes.io/projected/54bb03e2-83a7-46e0-bda4-453f7c0b622c-kube-api-access-ccbst\") pod \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.127765 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54bb03e2-83a7-46e0-bda4-453f7c0b622c-serving-cert\") pod \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.127863 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-config\") pod \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.127898 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-client-ca\") pod \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.127931 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-config\") pod \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\" (UID: \"54bb03e2-83a7-46e0-bda4-453f7c0b622c\") " Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.127963 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-proxy-ca-bundles\") pod \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.128005 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-client-ca\") pod \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.128048 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blffg\" (UniqueName: \"kubernetes.io/projected/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-kube-api-access-blffg\") pod \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.128073 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-serving-cert\") pod \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\" (UID: \"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91\") " Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.128600 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-config" (OuterVolumeSpecName: "config") pod "54bb03e2-83a7-46e0-bda4-453f7c0b622c" (UID: 
"54bb03e2-83a7-46e0-bda4-453f7c0b622c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.128639 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-client-ca" (OuterVolumeSpecName: "client-ca") pod "54bb03e2-83a7-46e0-bda4-453f7c0b622c" (UID: "54bb03e2-83a7-46e0-bda4-453f7c0b622c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.128697 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-client-ca" (OuterVolumeSpecName: "client-ca") pod "e1871000-c9b6-488a-b8e2-5ad7bf5c3a91" (UID: "e1871000-c9b6-488a-b8e2-5ad7bf5c3a91"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.128726 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-config" (OuterVolumeSpecName: "config") pod "e1871000-c9b6-488a-b8e2-5ad7bf5c3a91" (UID: "e1871000-c9b6-488a-b8e2-5ad7bf5c3a91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.130972 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e1871000-c9b6-488a-b8e2-5ad7bf5c3a91" (UID: "e1871000-c9b6-488a-b8e2-5ad7bf5c3a91"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.132974 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54bb03e2-83a7-46e0-bda4-453f7c0b622c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "54bb03e2-83a7-46e0-bda4-453f7c0b622c" (UID: "54bb03e2-83a7-46e0-bda4-453f7c0b622c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.133394 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-kube-api-access-blffg" (OuterVolumeSpecName: "kube-api-access-blffg") pod "e1871000-c9b6-488a-b8e2-5ad7bf5c3a91" (UID: "e1871000-c9b6-488a-b8e2-5ad7bf5c3a91"). InnerVolumeSpecName "kube-api-access-blffg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.133431 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e1871000-c9b6-488a-b8e2-5ad7bf5c3a91" (UID: "e1871000-c9b6-488a-b8e2-5ad7bf5c3a91"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.133991 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54bb03e2-83a7-46e0-bda4-453f7c0b622c-kube-api-access-ccbst" (OuterVolumeSpecName: "kube-api-access-ccbst") pod "54bb03e2-83a7-46e0-bda4-453f7c0b622c" (UID: "54bb03e2-83a7-46e0-bda4-453f7c0b622c"). 
InnerVolumeSpecName "kube-api-access-ccbst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.229736 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.229773 4803 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.229783 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bb03e2-83a7-46e0-bda4-453f7c0b622c-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.229791 4803 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.229802 4803 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.229810 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blffg\" (UniqueName: \"kubernetes.io/projected/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-kube-api-access-blffg\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.229819 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.229827 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccbst\" (UniqueName: \"kubernetes.io/projected/54bb03e2-83a7-46e0-bda4-453f7c0b622c-kube-api-access-ccbst\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.229836 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54bb03e2-83a7-46e0-bda4-453f7c0b622c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.544689 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6"] Jan 27 21:52:40 crc kubenswrapper[4803]: E0127 21:52:40.545212 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1871000-c9b6-488a-b8e2-5ad7bf5c3a91" containerName="controller-manager" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.545226 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1871000-c9b6-488a-b8e2-5ad7bf5c3a91" containerName="controller-manager" Jan 27 21:52:40 crc kubenswrapper[4803]: E0127 21:52:40.545237 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bb03e2-83a7-46e0-bda4-453f7c0b622c" containerName="route-controller-manager" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.545244 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bb03e2-83a7-46e0-bda4-453f7c0b622c" containerName="route-controller-manager" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.545343 4803 
memory_manager.go:354] "RemoveStaleState removing state" podUID="54bb03e2-83a7-46e0-bda4-453f7c0b622c" containerName="route-controller-manager" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.545358 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1871000-c9b6-488a-b8e2-5ad7bf5c3a91" containerName="controller-manager" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.545786 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.550924 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8"] Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.551655 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.557030 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6"] Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.563632 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8"] Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.608827 4803 generic.go:334] "Generic (PLEG): container finished" podID="54bb03e2-83a7-46e0-bda4-453f7c0b622c" containerID="ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f" exitCode=0 Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.608917 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.608917 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" event={"ID":"54bb03e2-83a7-46e0-bda4-453f7c0b622c","Type":"ContainerDied","Data":"ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f"} Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.608953 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2" event={"ID":"54bb03e2-83a7-46e0-bda4-453f7c0b622c","Type":"ContainerDied","Data":"a4e07a6668bbb26d8073989e8047c01c63390896930abc220eb54249093a22dc"} Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.608973 4803 scope.go:117] "RemoveContainer" containerID="ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.612383 4803 generic.go:334] "Generic (PLEG): container finished" podID="e1871000-c9b6-488a-b8e2-5ad7bf5c3a91" containerID="7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0" exitCode=0 Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.612436 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" event={"ID":"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91","Type":"ContainerDied","Data":"7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0"} Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.612468 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" 
event={"ID":"e1871000-c9b6-488a-b8e2-5ad7bf5c3a91","Type":"ContainerDied","Data":"073673fdf3bf3dea7ad303299988b6bfa6930691269be0b62388e0262ac75fc7"} Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.612583 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.632904 4803 scope.go:117] "RemoveContainer" containerID="ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f" Jan 27 21:52:40 crc kubenswrapper[4803]: E0127 21:52:40.633372 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f\": container with ID starting with ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f not found: ID does not exist" containerID="ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.633409 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f"} err="failed to get container status \"ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f\": rpc error: code = NotFound desc = could not find container \"ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f\": container with ID starting with ab09ff1f093f92d1416c9afedbd5d1c3b6105f9ef1d30ef12d04bf34c4b5cf2f not found: ID does not exist" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.633429 4803 scope.go:117] "RemoveContainer" containerID="7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.635032 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-proxy-ca-bundles\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.635065 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fkg\" (UniqueName: \"kubernetes.io/projected/160a3ef9-7414-455b-b63f-ae53a7f50e05-kube-api-access-q4fkg\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.635094 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-client-ca\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.635118 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/160a3ef9-7414-455b-b63f-ae53a7f50e05-serving-cert\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 
crc kubenswrapper[4803]: I0127 21:52:40.635145 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22tnv\" (UniqueName: \"kubernetes.io/projected/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-kube-api-access-22tnv\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.635164 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-config\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.635373 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-serving-cert\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.635450 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-client-ca\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.635473 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-config\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.636587 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2"] Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.643045 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8ffd47fb-8d5x2"] Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.648110 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv"] Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.653334 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b86ff6bc6-pw7mv"] Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.653894 4803 scope.go:117] "RemoveContainer" containerID="7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0" Jan 27 21:52:40 crc kubenswrapper[4803]: E0127 21:52:40.654429 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0\": container with ID starting with 7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0 not found: ID does 
not exist" containerID="7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.654456 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0"} err="failed to get container status \"7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0\": rpc error: code = NotFound desc = could not find container \"7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0\": container with ID starting with 7aab82620fb424cf415ab31e834bd62f29513a152a151f964d1ca299be56bcc0 not found: ID does not exist" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.736608 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-client-ca\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.736653 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-config\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.736693 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-proxy-ca-bundles\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.736711 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4fkg\" (UniqueName: \"kubernetes.io/projected/160a3ef9-7414-455b-b63f-ae53a7f50e05-kube-api-access-q4fkg\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.736729 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-client-ca\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.736746 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/160a3ef9-7414-455b-b63f-ae53a7f50e05-serving-cert\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.736767 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-config\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " 
pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.736781 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22tnv\" (UniqueName: \"kubernetes.io/projected/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-kube-api-access-22tnv\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.736811 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-serving-cert\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.738360 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-client-ca\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.738748 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-proxy-ca-bundles\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.738785 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-client-ca\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.739180 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-config\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.739297 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-config\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.741306 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-serving-cert\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.741817 4803 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/160a3ef9-7414-455b-b63f-ae53a7f50e05-serving-cert\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.757623 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22tnv\" (UniqueName: \"kubernetes.io/projected/7cd4933d-5334-4da7-8a38-e0f42c85bfbe-kube-api-access-22tnv\") pod \"route-controller-manager-c4b5fc665-k52v8\" (UID: \"7cd4933d-5334-4da7-8a38-e0f42c85bfbe\") " pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.764524 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4fkg\" (UniqueName: \"kubernetes.io/projected/160a3ef9-7414-455b-b63f-ae53a7f50e05-kube-api-access-q4fkg\") pod \"controller-manager-bbbbdff7c-vcjn6\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.865474 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:40 crc kubenswrapper[4803]: I0127 21:52:40.875844 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.046042 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6"] Jan 27 21:52:41 crc kubenswrapper[4803]: W0127 21:52:41.055398 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod160a3ef9_7414_455b_b63f_ae53a7f50e05.slice/crio-b539c5fc51654e6e474b479fe2db1caeecf5adc2a6c113714cefcdb71e380657 WatchSource:0}: Error finding container b539c5fc51654e6e474b479fe2db1caeecf5adc2a6c113714cefcdb71e380657: Status 404 returned error can't find the container with id b539c5fc51654e6e474b479fe2db1caeecf5adc2a6c113714cefcdb71e380657 Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.080817 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8"] Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.618773 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" event={"ID":"7cd4933d-5334-4da7-8a38-e0f42c85bfbe","Type":"ContainerStarted","Data":"adf436f517d444c036e20dd4e0eb30efbe4e95022d94c8064e2f9cbfeeb56f1b"} Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.618814 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" event={"ID":"7cd4933d-5334-4da7-8a38-e0f42c85bfbe","Type":"ContainerStarted","Data":"f511968cfee874172964d5ba8a44a82cbf9ce513ce49cbc6e76b62b87b5cd9fe"} Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.620200 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.621176 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" event={"ID":"160a3ef9-7414-455b-b63f-ae53a7f50e05","Type":"ContainerStarted","Data":"2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8"} Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.621201 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" event={"ID":"160a3ef9-7414-455b-b63f-ae53a7f50e05","Type":"ContainerStarted","Data":"b539c5fc51654e6e474b479fe2db1caeecf5adc2a6c113714cefcdb71e380657"} Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.621691 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.625762 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.627141 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.656277 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podStartSLOduration=2.656263764 podStartE2EDuration="2.656263764s" podCreationTimestamp="2026-01-27 21:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:52:41.642029757 +0000 UTC m=+314.058051466" watchObservedRunningTime="2026-01-27 21:52:41.656263764 +0000 UTC m=+314.072285463" Jan 27 21:52:41 crc kubenswrapper[4803]: I0127 21:52:41.679179 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" podStartSLOduration=2.6791623209999997 podStartE2EDuration="2.679162321s" podCreationTimestamp="2026-01-27 21:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:52:41.677786685 +0000 UTC m=+314.093808384" watchObservedRunningTime="2026-01-27 21:52:41.679162321 +0000 UTC m=+314.095184020" Jan 27 21:52:42 crc kubenswrapper[4803]: I0127 21:52:42.316159 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54bb03e2-83a7-46e0-bda4-453f7c0b622c" path="/var/lib/kubelet/pods/54bb03e2-83a7-46e0-bda4-453f7c0b622c/volumes" Jan 27 21:52:42 crc kubenswrapper[4803]: I0127 21:52:42.316690 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1871000-c9b6-488a-b8e2-5ad7bf5c3a91" path="/var/lib/kubelet/pods/e1871000-c9b6-488a-b8e2-5ad7bf5c3a91/volumes" Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.857737 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qn26k"] Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.859886 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.875059 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qn26k"] Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.927333 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-registry-certificates\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.927438 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.927480 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-bound-sa-token\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.927547 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-registry-tls\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.927573 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.927615 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lvrn\" (UniqueName: \"kubernetes.io/projected/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-kube-api-access-4lvrn\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.927646 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-trusted-ca\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.927678 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:47 crc kubenswrapper[4803]: I0127 21:52:47.958620 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.028783 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-registry-tls\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.028901 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.028955 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lvrn\" (UniqueName: \"kubernetes.io/projected/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-kube-api-access-4lvrn\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.028997 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-trusted-ca\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.029037 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.029101 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-registry-certificates\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.029146 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-bound-sa-token\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.029445 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.030342 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-trusted-ca\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.030424 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-registry-certificates\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.037769 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-registry-tls\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.041505 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.046770 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lvrn\" (UniqueName: \"kubernetes.io/projected/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-kube-api-access-4lvrn\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.055629 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5087ca2-7fa8-4a3e-b1bb-25335a4ed927-bound-sa-token\") pod \"image-registry-66df7c8f76-qn26k\" (UID: \"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927\") " pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.177353 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.631602 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qn26k"] Jan 27 21:52:48 crc kubenswrapper[4803]: W0127 21:52:48.641151 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5087ca2_7fa8_4a3e_b1bb_25335a4ed927.slice/crio-236bfd683cf2979d2404e78765bbaa6aca82d963b3a7a0bcbc45f7df4895190e WatchSource:0}: Error finding container 236bfd683cf2979d2404e78765bbaa6aca82d963b3a7a0bcbc45f7df4895190e: Status 404 returned error can't find the container with id 236bfd683cf2979d2404e78765bbaa6aca82d963b3a7a0bcbc45f7df4895190e Jan 27 21:52:48 crc kubenswrapper[4803]: I0127 21:52:48.657719 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" event={"ID":"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927","Type":"ContainerStarted","Data":"236bfd683cf2979d2404e78765bbaa6aca82d963b3a7a0bcbc45f7df4895190e"} Jan 27 21:52:49 crc kubenswrapper[4803]: I0127 21:52:49.666694 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" event={"ID":"c5087ca2-7fa8-4a3e-b1bb-25335a4ed927","Type":"ContainerStarted","Data":"34fd806e9a6e548b302132a11e8ce5dcf7ec030234aa8ffdc1cdbe42919e1e45"} Jan 27 21:52:49 crc kubenswrapper[4803]: I0127 21:52:49.667122 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:52:49 crc kubenswrapper[4803]: I0127 21:52:49.692152 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" podStartSLOduration=2.692122187 podStartE2EDuration="2.692122187s" podCreationTimestamp="2026-01-27 21:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:52:49.68731195 +0000 UTC m=+322.103333669" watchObservedRunningTime="2026-01-27 21:52:49.692122187 +0000 UTC m=+322.108143916" Jan 27 21:53:08 crc kubenswrapper[4803]: I0127 21:53:08.189603 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" Jan 27 21:53:08 crc kubenswrapper[4803]: I0127 21:53:08.272922 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbljw"] Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.101621 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6"] Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.103020 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" podUID="160a3ef9-7414-455b-b63f-ae53a7f50e05" containerName="controller-manager" containerID="cri-o://2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8" gracePeriod=30 Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.565032 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.589948 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4fkg\" (UniqueName: \"kubernetes.io/projected/160a3ef9-7414-455b-b63f-ae53a7f50e05-kube-api-access-q4fkg\") pod \"160a3ef9-7414-455b-b63f-ae53a7f50e05\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.590012 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/160a3ef9-7414-455b-b63f-ae53a7f50e05-serving-cert\") pod \"160a3ef9-7414-455b-b63f-ae53a7f50e05\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.590100 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-config\") pod \"160a3ef9-7414-455b-b63f-ae53a7f50e05\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.590127 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-client-ca\") pod \"160a3ef9-7414-455b-b63f-ae53a7f50e05\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.590186 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-proxy-ca-bundles\") pod \"160a3ef9-7414-455b-b63f-ae53a7f50e05\" (UID: \"160a3ef9-7414-455b-b63f-ae53a7f50e05\") " Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.591519 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-client-ca" (OuterVolumeSpecName: "client-ca") pod "160a3ef9-7414-455b-b63f-ae53a7f50e05" (UID: "160a3ef9-7414-455b-b63f-ae53a7f50e05"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.591544 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "160a3ef9-7414-455b-b63f-ae53a7f50e05" (UID: "160a3ef9-7414-455b-b63f-ae53a7f50e05"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.591533 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-config" (OuterVolumeSpecName: "config") pod "160a3ef9-7414-455b-b63f-ae53a7f50e05" (UID: "160a3ef9-7414-455b-b63f-ae53a7f50e05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.595609 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160a3ef9-7414-455b-b63f-ae53a7f50e05-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "160a3ef9-7414-455b-b63f-ae53a7f50e05" (UID: "160a3ef9-7414-455b-b63f-ae53a7f50e05"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.596045 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160a3ef9-7414-455b-b63f-ae53a7f50e05-kube-api-access-q4fkg" (OuterVolumeSpecName: "kube-api-access-q4fkg") pod "160a3ef9-7414-455b-b63f-ae53a7f50e05" (UID: "160a3ef9-7414-455b-b63f-ae53a7f50e05"). InnerVolumeSpecName "kube-api-access-q4fkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.691945 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.691980 4803 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.691994 4803 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/160a3ef9-7414-455b-b63f-ae53a7f50e05-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.692008 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4fkg\" (UniqueName: \"kubernetes.io/projected/160a3ef9-7414-455b-b63f-ae53a7f50e05-kube-api-access-q4fkg\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.692020 4803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/160a3ef9-7414-455b-b63f-ae53a7f50e05-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.805927 4803 generic.go:334] "Generic (PLEG): container finished" podID="160a3ef9-7414-455b-b63f-ae53a7f50e05" containerID="2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8" exitCode=0 Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.805993 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.806006 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" event={"ID":"160a3ef9-7414-455b-b63f-ae53a7f50e05","Type":"ContainerDied","Data":"2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8"} Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.806081 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6" event={"ID":"160a3ef9-7414-455b-b63f-ae53a7f50e05","Type":"ContainerDied","Data":"b539c5fc51654e6e474b479fe2db1caeecf5adc2a6c113714cefcdb71e380657"} Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.806112 4803 scope.go:117] "RemoveContainer" containerID="2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.825883 4803 scope.go:117] "RemoveContainer" containerID="2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8" Jan 27 21:53:14 crc kubenswrapper[4803]: E0127 21:53:14.826424 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8\": container with ID starting with 2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8 not found: ID does not exist" containerID="2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.826472 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8"} err="failed to get container status \"2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8\": rpc error: code = NotFound desc = could not find container \"2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8\": container with ID starting with 2e4b76e760d0e670414cb25f555538379ada9c77ae63e520e7b1b55b1b66b2e8 not found: ID does not exist" Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.839868 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6"] Jan 27 21:53:14 crc kubenswrapper[4803]: I0127 21:53:14.842874 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bbbbdff7c-vcjn6"] Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.575069 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7df488d7f-9qs98"] Jan 27 21:53:15 crc kubenswrapper[4803]: E0127 21:53:15.575454 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160a3ef9-7414-455b-b63f-ae53a7f50e05" containerName="controller-manager" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.575477 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="160a3ef9-7414-455b-b63f-ae53a7f50e05" containerName="controller-manager" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.575644 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="160a3ef9-7414-455b-b63f-ae53a7f50e05" containerName="controller-manager" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.576449 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.586595 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.586764 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.586779 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.587758 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.588751 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.589398 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.591048 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7df488d7f-9qs98"] Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.596763 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.603056 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f37cfcbc-f864-4f97-804e-b5ba5313c347-serving-cert\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.603134 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37cfcbc-f864-4f97-804e-b5ba5313c347-config\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.603262 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f37cfcbc-f864-4f97-804e-b5ba5313c347-client-ca\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.603394 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkt84\" (UniqueName: \"kubernetes.io/projected/f37cfcbc-f864-4f97-804e-b5ba5313c347-kube-api-access-jkt84\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.603571 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/f37cfcbc-f864-4f97-804e-b5ba5313c347-proxy-ca-bundles\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.704258 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f37cfcbc-f864-4f97-804e-b5ba5313c347-proxy-ca-bundles\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.704310 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f37cfcbc-f864-4f97-804e-b5ba5313c347-serving-cert\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.704329 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37cfcbc-f864-4f97-804e-b5ba5313c347-config\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.704356 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f37cfcbc-f864-4f97-804e-b5ba5313c347-client-ca\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.704399 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkt84\" (UniqueName: \"kubernetes.io/projected/f37cfcbc-f864-4f97-804e-b5ba5313c347-kube-api-access-jkt84\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.705504 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f37cfcbc-f864-4f97-804e-b5ba5313c347-client-ca\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.706111 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f37cfcbc-f864-4f97-804e-b5ba5313c347-proxy-ca-bundles\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.706592 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37cfcbc-f864-4f97-804e-b5ba5313c347-config\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 
21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.717512 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f37cfcbc-f864-4f97-804e-b5ba5313c347-serving-cert\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.720197 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkt84\" (UniqueName: \"kubernetes.io/projected/f37cfcbc-f864-4f97-804e-b5ba5313c347-kube-api-access-jkt84\") pod \"controller-manager-7df488d7f-9qs98\" (UID: \"f37cfcbc-f864-4f97-804e-b5ba5313c347\") " pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:15 crc kubenswrapper[4803]: I0127 21:53:15.904166 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:16 crc kubenswrapper[4803]: I0127 21:53:16.280543 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7df488d7f-9qs98"] Jan 27 21:53:16 crc kubenswrapper[4803]: I0127 21:53:16.313718 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160a3ef9-7414-455b-b63f-ae53a7f50e05" path="/var/lib/kubelet/pods/160a3ef9-7414-455b-b63f-ae53a7f50e05/volumes" Jan 27 21:53:16 crc kubenswrapper[4803]: I0127 21:53:16.343711 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:53:16 crc kubenswrapper[4803]: I0127 21:53:16.343775 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:53:16 crc kubenswrapper[4803]: I0127 21:53:16.816906 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" event={"ID":"f37cfcbc-f864-4f97-804e-b5ba5313c347","Type":"ContainerStarted","Data":"626bf7d2d2063c31dee0a7ff5af68e33526fb9a8872300b9a8c319817233a878"} Jan 27 21:53:16 crc kubenswrapper[4803]: I0127 21:53:16.816960 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" event={"ID":"f37cfcbc-f864-4f97-804e-b5ba5313c347","Type":"ContainerStarted","Data":"0f26734812e7646c2dec50b74cdfefd907628e5ed063c32abe79e66d31b9a6e4"} Jan 27 21:53:16 crc kubenswrapper[4803]: I0127 21:53:16.817105 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:16 crc kubenswrapper[4803]: I0127 21:53:16.824499 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 21:53:16 crc kubenswrapper[4803]: I0127 21:53:16.848371 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podStartSLOduration=2.848334269 
podStartE2EDuration="2.848334269s" podCreationTimestamp="2026-01-27 21:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:53:16.833229952 +0000 UTC m=+349.249251661" watchObservedRunningTime="2026-01-27 21:53:16.848334269 +0000 UTC m=+349.264355988" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.315835 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" podUID="a2e0fd9f-4917-4c1c-8b58-f952407e7e68" containerName="registry" containerID="cri-o://8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145" gracePeriod=30 Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.839662 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.845759 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkg89\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-kube-api-access-hkg89\") pod \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.845803 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-installation-pull-secrets\") pod \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.845870 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-bound-sa-token\") pod \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.845914 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-ca-trust-extracted\") pod \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.845939 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-trusted-ca\") pod \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.845960 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-certificates\") pod \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.846068 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.846087 4803 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-tls\") pod \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\" (UID: \"a2e0fd9f-4917-4c1c-8b58-f952407e7e68\") " Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.847191 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a2e0fd9f-4917-4c1c-8b58-f952407e7e68" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.847210 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a2e0fd9f-4917-4c1c-8b58-f952407e7e68" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.852287 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a2e0fd9f-4917-4c1c-8b58-f952407e7e68" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.852285 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-kube-api-access-hkg89" (OuterVolumeSpecName: "kube-api-access-hkg89") pod "a2e0fd9f-4917-4c1c-8b58-f952407e7e68" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68"). InnerVolumeSpecName "kube-api-access-hkg89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.854101 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a2e0fd9f-4917-4c1c-8b58-f952407e7e68" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.854668 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a2e0fd9f-4917-4c1c-8b58-f952407e7e68" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.869957 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a2e0fd9f-4917-4c1c-8b58-f952407e7e68" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.875939 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a2e0fd9f-4917-4c1c-8b58-f952407e7e68" (UID: "a2e0fd9f-4917-4c1c-8b58-f952407e7e68"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.914905 4803 generic.go:334] "Generic (PLEG): container finished" podID="a2e0fd9f-4917-4c1c-8b58-f952407e7e68" containerID="8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145" exitCode=0 Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.914962 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.914957 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" event={"ID":"a2e0fd9f-4917-4c1c-8b58-f952407e7e68","Type":"ContainerDied","Data":"8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145"} Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.915091 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bbljw" event={"ID":"a2e0fd9f-4917-4c1c-8b58-f952407e7e68","Type":"ContainerDied","Data":"4fc7dce46dacff4e5657fc9fc1b685b68b946b78e060991ab304ba7d734dfdd6"} Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.915114 4803 scope.go:117] "RemoveContainer" containerID="8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.939984 4803 scope.go:117] "RemoveContainer" containerID="8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145" Jan 27 21:53:33 crc kubenswrapper[4803]: E0127 21:53:33.941496 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145\": container with ID starting with 8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145 not found: ID does not exist" containerID="8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.941605 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145"} err="failed to get container status \"8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145\": rpc error: code = NotFound desc = could not find container \"8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145\": container with ID starting with 8c1035790da8af903deee2c9212bfdec811f91899acb2116836dbd9c273dd145 not found: ID does not exist" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.948188 4803 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.948232 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.948245 4803 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.948259 4803 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.948271 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkg89\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-kube-api-access-hkg89\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.948282 4803 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.948293 4803 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2e0fd9f-4917-4c1c-8b58-f952407e7e68-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.951703 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbljw"] Jan 27 21:53:33 crc kubenswrapper[4803]: I0127 21:53:33.955428 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbljw"] Jan 27 21:53:34 crc kubenswrapper[4803]: I0127 21:53:34.318046 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e0fd9f-4917-4c1c-8b58-f952407e7e68" path="/var/lib/kubelet/pods/a2e0fd9f-4917-4c1c-8b58-f952407e7e68/volumes" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.257256 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m24l6"] Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.258248 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m24l6" podUID="6a2e67f5-2414-4850-a255-53737799d98b" containerName="registry-server" containerID="cri-o://c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134" gracePeriod=30 Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.281093 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pmd2q"] Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.281836 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pmd2q" podUID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" containerName="registry-server" containerID="cri-o://990039d243b7a0d79cc1a8360fb8706ad0615ac19a422edb3af2c75e5f3fc675" gracePeriod=30 Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.299821 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n7mdf"] Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.300347 4803 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" podUID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" containerName="marketplace-operator" containerID="cri-o://12def603a43e4c904acbe458342b78366ea296dbf65f1eb128344ebd091f0bcf" gracePeriod=30 Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.317305 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbg9j"] Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.317342 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wrpjf"] Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.317569 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wrpjf" podUID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" containerName="registry-server" containerID="cri-o://a9528c792f84d4a25d37955d284f8e27afa90ac4949ae3fa3f4e51b091ce208c" gracePeriod=30 Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.317792 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sbg9j" podUID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" containerName="registry-server" containerID="cri-o://5f8c63c87ebdf26cc3572e28225590b76f0b908e5448fd4746f2d7efc03e741e" gracePeriod=30 Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.329905 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlj5d"] Jan 27 21:53:46 crc kubenswrapper[4803]: E0127 21:53:46.330230 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e0fd9f-4917-4c1c-8b58-f952407e7e68" containerName="registry" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.330260 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e0fd9f-4917-4c1c-8b58-f952407e7e68" containerName="registry" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.330441 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e0fd9f-4917-4c1c-8b58-f952407e7e68" containerName="registry" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.331057 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.334452 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlj5d"] Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.343336 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.343426 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.406835 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b1c25f0-10e5-41a3-81ca-aef5372a4d38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlj5d\" (UID: \"2b1c25f0-10e5-41a3-81ca-aef5372a4d38\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.406900 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b1c25f0-10e5-41a3-81ca-aef5372a4d38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlj5d\" (UID: \"2b1c25f0-10e5-41a3-81ca-aef5372a4d38\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.407059 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l889f\" (UniqueName: \"kubernetes.io/projected/2b1c25f0-10e5-41a3-81ca-aef5372a4d38-kube-api-access-l889f\") pod \"marketplace-operator-79b997595-vlj5d\" (UID: \"2b1c25f0-10e5-41a3-81ca-aef5372a4d38\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.508616 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b1c25f0-10e5-41a3-81ca-aef5372a4d38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlj5d\" (UID: \"2b1c25f0-10e5-41a3-81ca-aef5372a4d38\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.508663 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b1c25f0-10e5-41a3-81ca-aef5372a4d38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlj5d\" (UID: \"2b1c25f0-10e5-41a3-81ca-aef5372a4d38\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.508705 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l889f\" (UniqueName: \"kubernetes.io/projected/2b1c25f0-10e5-41a3-81ca-aef5372a4d38-kube-api-access-l889f\") pod 
\"marketplace-operator-79b997595-vlj5d\" (UID: \"2b1c25f0-10e5-41a3-81ca-aef5372a4d38\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.510265 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b1c25f0-10e5-41a3-81ca-aef5372a4d38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlj5d\" (UID: \"2b1c25f0-10e5-41a3-81ca-aef5372a4d38\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.514881 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b1c25f0-10e5-41a3-81ca-aef5372a4d38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlj5d\" (UID: \"2b1c25f0-10e5-41a3-81ca-aef5372a4d38\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.525255 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l889f\" (UniqueName: \"kubernetes.io/projected/2b1c25f0-10e5-41a3-81ca-aef5372a4d38-kube-api-access-l889f\") pod \"marketplace-operator-79b997595-vlj5d\" (UID: \"2b1c25f0-10e5-41a3-81ca-aef5372a4d38\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.727076 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.785325 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m24l6" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.811967 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-catalog-content\") pod \"6a2e67f5-2414-4850-a255-53737799d98b\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.812016 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4wt7\" (UniqueName: \"kubernetes.io/projected/6a2e67f5-2414-4850-a255-53737799d98b-kube-api-access-c4wt7\") pod \"6a2e67f5-2414-4850-a255-53737799d98b\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.812102 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-utilities\") pod \"6a2e67f5-2414-4850-a255-53737799d98b\" (UID: \"6a2e67f5-2414-4850-a255-53737799d98b\") " Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.823649 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-utilities" (OuterVolumeSpecName: "utilities") pod "6a2e67f5-2414-4850-a255-53737799d98b" (UID: "6a2e67f5-2414-4850-a255-53737799d98b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.835065 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a2e67f5-2414-4850-a255-53737799d98b-kube-api-access-c4wt7" (OuterVolumeSpecName: "kube-api-access-c4wt7") pod "6a2e67f5-2414-4850-a255-53737799d98b" (UID: "6a2e67f5-2414-4850-a255-53737799d98b"). InnerVolumeSpecName "kube-api-access-c4wt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.868555 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a2e67f5-2414-4850-a255-53737799d98b" (UID: "6a2e67f5-2414-4850-a255-53737799d98b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.917431 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.917467 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4wt7\" (UniqueName: \"kubernetes.io/projected/6a2e67f5-2414-4850-a255-53737799d98b-kube-api-access-c4wt7\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.917480 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a2e67f5-2414-4850-a255-53737799d98b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.988699 4803 generic.go:334] "Generic (PLEG): container finished" podID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" containerID="5f8c63c87ebdf26cc3572e28225590b76f0b908e5448fd4746f2d7efc03e741e" exitCode=0 Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.988781 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbg9j" event={"ID":"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1","Type":"ContainerDied","Data":"5f8c63c87ebdf26cc3572e28225590b76f0b908e5448fd4746f2d7efc03e741e"} Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.990968 4803 generic.go:334] "Generic (PLEG): container finished" podID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" containerID="a9528c792f84d4a25d37955d284f8e27afa90ac4949ae3fa3f4e51b091ce208c" exitCode=0 Jan 27 21:53:46 crc kubenswrapper[4803]: I0127 21:53:46.991038 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrpjf" event={"ID":"467bcdf9-e419-4ef2-84af-2cfedbfa28f2","Type":"ContainerDied","Data":"a9528c792f84d4a25d37955d284f8e27afa90ac4949ae3fa3f4e51b091ce208c"} Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.007362 4803 generic.go:334] "Generic (PLEG): container finished" podID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" containerID="12def603a43e4c904acbe458342b78366ea296dbf65f1eb128344ebd091f0bcf" exitCode=0 Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.007405 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" event={"ID":"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0","Type":"ContainerDied","Data":"12def603a43e4c904acbe458342b78366ea296dbf65f1eb128344ebd091f0bcf"} Jan 27 21:53:47 crc kubenswrapper[4803]: 
I0127 21:53:47.007473 4803 scope.go:117] "RemoveContainer" containerID="69e7c83be0df564cb9724449030dd860fee239fa3e3d4f482149da324626e2cc" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.017686 4803 generic.go:334] "Generic (PLEG): container finished" podID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" containerID="990039d243b7a0d79cc1a8360fb8706ad0615ac19a422edb3af2c75e5f3fc675" exitCode=0 Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.017771 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmd2q" event={"ID":"f63e0833-14f7-4d43-805c-a5a05c2fdf02","Type":"ContainerDied","Data":"990039d243b7a0d79cc1a8360fb8706ad0615ac19a422edb3af2c75e5f3fc675"} Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.023336 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.023559 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m24l6" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.023640 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24l6" event={"ID":"6a2e67f5-2414-4850-a255-53737799d98b","Type":"ContainerDied","Data":"c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134"} Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.023472 4803 generic.go:334] "Generic (PLEG): container finished" podID="6a2e67f5-2414-4850-a255-53737799d98b" containerID="c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134" exitCode=0 Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.025950 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24l6" event={"ID":"6a2e67f5-2414-4850-a255-53737799d98b","Type":"ContainerDied","Data":"a38f7ac3e855aa183ac4a18f5722a4a49596b852801095c90dd00bf339be1390"} Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.033366 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.034355 4803 scope.go:117] "RemoveContainer" containerID="c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.036337 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbg9j" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.056304 4803 scope.go:117] "RemoveContainer" containerID="caf1730c6fee8c1714eb37929b9ba40dacf759c9f3a3887c8b405380564a10f3" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.064620 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pmd2q" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.086247 4803 scope.go:117] "RemoveContainer" containerID="28dbcf471c63ece8a022bc99db0b9e1548972c96bba85021871e0ada531febb2" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.110084 4803 scope.go:117] "RemoveContainer" containerID="c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134" Jan 27 21:53:47 crc kubenswrapper[4803]: E0127 21:53:47.111633 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134\": container with ID starting with c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134 not found: ID does not exist" containerID="c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.111662 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134"} err="failed to get container status \"c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134\": rpc error: code = NotFound desc = could not find container \"c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134\": container with ID starting with c04d6935833e1f071ab04d19ff003dca57f772d10934445cbf4dafe83292a134 not found: ID does not exist" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.111699 4803 scope.go:117] "RemoveContainer" containerID="caf1730c6fee8c1714eb37929b9ba40dacf759c9f3a3887c8b405380564a10f3" Jan 27 21:53:47 crc kubenswrapper[4803]: E0127 21:53:47.113364 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf1730c6fee8c1714eb37929b9ba40dacf759c9f3a3887c8b405380564a10f3\": container with ID starting with caf1730c6fee8c1714eb37929b9ba40dacf759c9f3a3887c8b405380564a10f3 not found: ID does not exist" containerID="caf1730c6fee8c1714eb37929b9ba40dacf759c9f3a3887c8b405380564a10f3" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.113388 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf1730c6fee8c1714eb37929b9ba40dacf759c9f3a3887c8b405380564a10f3"} err="failed to get container status \"caf1730c6fee8c1714eb37929b9ba40dacf759c9f3a3887c8b405380564a10f3\": rpc error: code = NotFound desc = could not find container \"caf1730c6fee8c1714eb37929b9ba40dacf759c9f3a3887c8b405380564a10f3\": container with ID starting with caf1730c6fee8c1714eb37929b9ba40dacf759c9f3a3887c8b405380564a10f3 not found: ID does not exist" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.113404 4803 scope.go:117] "RemoveContainer" containerID="28dbcf471c63ece8a022bc99db0b9e1548972c96bba85021871e0ada531febb2" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.113466 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m24l6"] Jan 27 21:53:47 crc kubenswrapper[4803]: E0127 21:53:47.113728 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28dbcf471c63ece8a022bc99db0b9e1548972c96bba85021871e0ada531febb2\": container with ID starting with 28dbcf471c63ece8a022bc99db0b9e1548972c96bba85021871e0ada531febb2 not found: ID does not exist" containerID="28dbcf471c63ece8a022bc99db0b9e1548972c96bba85021871e0ada531febb2" Jan 
27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.113744 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28dbcf471c63ece8a022bc99db0b9e1548972c96bba85021871e0ada531febb2"} err="failed to get container status \"28dbcf471c63ece8a022bc99db0b9e1548972c96bba85021871e0ada531febb2\": rpc error: code = NotFound desc = could not find container \"28dbcf471c63ece8a022bc99db0b9e1548972c96bba85021871e0ada531febb2\": container with ID starting with 28dbcf471c63ece8a022bc99db0b9e1548972c96bba85021871e0ada531febb2 not found: ID does not exist" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.116508 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m24l6"] Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122161 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-catalog-content\") pod \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122215 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t7zb\" (UniqueName: \"kubernetes.io/projected/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-kube-api-access-4t7zb\") pod \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122250 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-trusted-ca\") pod \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122317 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-catalog-content\") pod \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122356 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-operator-metrics\") pod \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122383 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-utilities\") pod \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\" (UID: \"467bcdf9-e419-4ef2-84af-2cfedbfa28f2\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122407 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-catalog-content\") pod \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122460 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vsrk\" (UniqueName: 
\"kubernetes.io/projected/f63e0833-14f7-4d43-805c-a5a05c2fdf02-kube-api-access-8vsrk\") pod \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122493 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-utilities\") pod \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\" (UID: \"f63e0833-14f7-4d43-805c-a5a05c2fdf02\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122547 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz96c\" (UniqueName: \"kubernetes.io/projected/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-kube-api-access-vz96c\") pod \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122572 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-utilities\") pod \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\" (UID: \"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.122602 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkhrm\" (UniqueName: \"kubernetes.io/projected/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-kube-api-access-wkhrm\") pod \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\" (UID: \"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0\") " Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.123252 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" (UID: "4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.123522 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-utilities" (OuterVolumeSpecName: "utilities") pod "f63e0833-14f7-4d43-805c-a5a05c2fdf02" (UID: "f63e0833-14f7-4d43-805c-a5a05c2fdf02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.123767 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-utilities" (OuterVolumeSpecName: "utilities") pod "e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" (UID: "e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.125146 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-utilities" (OuterVolumeSpecName: "utilities") pod "467bcdf9-e419-4ef2-84af-2cfedbfa28f2" (UID: "467bcdf9-e419-4ef2-84af-2cfedbfa28f2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.126657 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-kube-api-access-vz96c" (OuterVolumeSpecName: "kube-api-access-vz96c") pod "e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" (UID: "e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1"). InnerVolumeSpecName "kube-api-access-vz96c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.126778 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63e0833-14f7-4d43-805c-a5a05c2fdf02-kube-api-access-8vsrk" (OuterVolumeSpecName: "kube-api-access-8vsrk") pod "f63e0833-14f7-4d43-805c-a5a05c2fdf02" (UID: "f63e0833-14f7-4d43-805c-a5a05c2fdf02"). InnerVolumeSpecName "kube-api-access-8vsrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.127053 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-kube-api-access-wkhrm" (OuterVolumeSpecName: "kube-api-access-wkhrm") pod "4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" (UID: "4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0"). InnerVolumeSpecName "kube-api-access-wkhrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.127132 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" (UID: "4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.128261 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-kube-api-access-4t7zb" (OuterVolumeSpecName: "kube-api-access-4t7zb") pod "467bcdf9-e419-4ef2-84af-2cfedbfa28f2" (UID: "467bcdf9-e419-4ef2-84af-2cfedbfa28f2"). InnerVolumeSpecName "kube-api-access-4t7zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.146015 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" (UID: "e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.186357 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f63e0833-14f7-4d43-805c-a5a05c2fdf02" (UID: "f63e0833-14f7-4d43-805c-a5a05c2fdf02"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.223372 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vsrk\" (UniqueName: \"kubernetes.io/projected/f63e0833-14f7-4d43-805c-a5a05c2fdf02-kube-api-access-8vsrk\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.223402 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.223411 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz96c\" (UniqueName: \"kubernetes.io/projected/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-kube-api-access-vz96c\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.223420 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.223428 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkhrm\" (UniqueName: \"kubernetes.io/projected/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-kube-api-access-wkhrm\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.223436 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.223444 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t7zb\" (UniqueName: \"kubernetes.io/projected/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-kube-api-access-4t7zb\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.223452 4803 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.223460 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.223468 4803 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.223477 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63e0833-14f7-4d43-805c-a5a05c2fdf02-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.239777 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "467bcdf9-e419-4ef2-84af-2cfedbfa28f2" (UID: "467bcdf9-e419-4ef2-84af-2cfedbfa28f2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.324998 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467bcdf9-e419-4ef2-84af-2cfedbfa28f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:47 crc kubenswrapper[4803]: I0127 21:53:47.350132 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlj5d"] Jan 27 21:53:47 crc kubenswrapper[4803]: W0127 21:53:47.355781 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b1c25f0_10e5_41a3_81ca_aef5372a4d38.slice/crio-5722b5f77981bb4edd37e2eb06a75c6d80162f2055b2438f708d8e0df2510f01 WatchSource:0}: Error finding container 5722b5f77981bb4edd37e2eb06a75c6d80162f2055b2438f708d8e0df2510f01: Status 404 returned error can't find the container with id 5722b5f77981bb4edd37e2eb06a75c6d80162f2055b2438f708d8e0df2510f01 Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.034019 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" event={"ID":"2b1c25f0-10e5-41a3-81ca-aef5372a4d38","Type":"ContainerStarted","Data":"90a62bdcb5a552347091f5153ec67d950d4493f9a3ac98b4bdc9806515e06dbf"} Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.034372 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.034390 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" event={"ID":"2b1c25f0-10e5-41a3-81ca-aef5372a4d38","Type":"ContainerStarted","Data":"5722b5f77981bb4edd37e2eb06a75c6d80162f2055b2438f708d8e0df2510f01"} Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.036110 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbg9j" event={"ID":"e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1","Type":"ContainerDied","Data":"25518cdcfb367ea8f1ce8a57af4051c97104ad2a4c6d2ed9d607946363208620"} Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.036155 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbg9j" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.036160 4803 scope.go:117] "RemoveContainer" containerID="5f8c63c87ebdf26cc3572e28225590b76f0b908e5448fd4746f2d7efc03e741e" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.036998 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.038570 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrpjf" event={"ID":"467bcdf9-e419-4ef2-84af-2cfedbfa28f2","Type":"ContainerDied","Data":"19f55507a8a010e76c0973a3217d8f60c0d5d9130d33e31153e5c037bc470047"} Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.038654 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wrpjf" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.042279 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" event={"ID":"4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0","Type":"ContainerDied","Data":"418b702a475d92a9844e05f95416fd1d9d44549b14290eaac5c39d96664264df"} Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.042306 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n7mdf" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.046495 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmd2q" event={"ID":"f63e0833-14f7-4d43-805c-a5a05c2fdf02","Type":"ContainerDied","Data":"a6a3911329c8bcfa5db3e7b2426681c642e4ec4f053ff6f1f2cffe5609cb7fe4"} Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.046594 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pmd2q" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.051180 4803 scope.go:117] "RemoveContainer" containerID="ea2e4d96356794a469c077c354c1730ca3e53cbfde7e939c9cb1a5893132e6b8" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.056248 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podStartSLOduration=2.056231244 podStartE2EDuration="2.056231244s" podCreationTimestamp="2026-01-27 21:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:53:48.055550026 +0000 UTC m=+380.471571735" watchObservedRunningTime="2026-01-27 21:53:48.056231244 +0000 UTC m=+380.472252943" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.092608 4803 scope.go:117] "RemoveContainer" containerID="a06d2da1a78457d7ba76299907472d260b396c0f3e253553768e417dab343b70" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.115437 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n7mdf"] Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.126158 4803 scope.go:117] "RemoveContainer" containerID="a9528c792f84d4a25d37955d284f8e27afa90ac4949ae3fa3f4e51b091ce208c" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.127009 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n7mdf"] Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.131857 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pmd2q"] Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.133816 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pmd2q"] Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.144624 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wrpjf"] Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.144703 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wrpjf"] Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.156490 4803 scope.go:117] "RemoveContainer" containerID="ff03cf3e18368e2a72134fdbe6b40f5d3160fa446e035b6b0e8cbc8030700a17" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 
21:53:48.160571 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbg9j"] Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.163630 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbg9j"] Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.175668 4803 scope.go:117] "RemoveContainer" containerID="b8d73f7507c01f01f374d273374f4296833792271ce126cd7e5e3f9078796ad8" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.192016 4803 scope.go:117] "RemoveContainer" containerID="12def603a43e4c904acbe458342b78366ea296dbf65f1eb128344ebd091f0bcf" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.204071 4803 scope.go:117] "RemoveContainer" containerID="990039d243b7a0d79cc1a8360fb8706ad0615ac19a422edb3af2c75e5f3fc675" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.216334 4803 scope.go:117] "RemoveContainer" containerID="0ae570f7b52d2a284c5bd60307d9619ec2e4a195d78f5231c72c91f6c9ebc389" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.232164 4803 scope.go:117] "RemoveContainer" containerID="81a842133c5c513fdecf09ec6e939675b4ff2fd0101dbf2db3108f63104dfdd5" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.316320 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" path="/var/lib/kubelet/pods/4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0/volumes" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.317238 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" path="/var/lib/kubelet/pods/467bcdf9-e419-4ef2-84af-2cfedbfa28f2/volumes" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.318303 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a2e67f5-2414-4850-a255-53737799d98b" path="/var/lib/kubelet/pods/6a2e67f5-2414-4850-a255-53737799d98b/volumes" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.319630 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" path="/var/lib/kubelet/pods/e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1/volumes" Jan 27 21:53:48 crc kubenswrapper[4803]: I0127 21:53:48.320372 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" path="/var/lib/kubelet/pods/f63e0833-14f7-4d43-805c-a5a05c2fdf02/volumes" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.072555 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9crs2"] Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073130 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2e67f5-2414-4850-a255-53737799d98b" containerName="extract-utilities" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073147 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2e67f5-2414-4850-a255-53737799d98b" containerName="extract-utilities" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073160 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" containerName="extract-content" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073168 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" containerName="extract-content" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073181 4803 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6a2e67f5-2414-4850-a255-53737799d98b" containerName="extract-content" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073189 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2e67f5-2414-4850-a255-53737799d98b" containerName="extract-content" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073200 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" containerName="marketplace-operator" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073209 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" containerName="marketplace-operator" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073218 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" containerName="extract-utilities" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073225 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" containerName="extract-utilities" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073236 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" containerName="extract-content" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073243 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" containerName="extract-content" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073255 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" containerName="extract-utilities" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073262 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" containerName="extract-utilities" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073272 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073279 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073287 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" containerName="extract-utilities" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073293 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" containerName="extract-utilities" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073303 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073312 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073322 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073329 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073342 4803 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" containerName="extract-content" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073348 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" containerName="extract-content" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073357 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2e67f5-2414-4850-a255-53737799d98b" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073364 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2e67f5-2414-4850-a255-53737799d98b" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073476 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" containerName="marketplace-operator" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073490 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63e0833-14f7-4d43-805c-a5a05c2fdf02" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073501 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a2e67f5-2414-4850-a255-53737799d98b" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073510 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" containerName="marketplace-operator" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073521 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ca1fa7-a955-45ad-8e86-b8b34b6e9aa1" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073533 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="467bcdf9-e419-4ef2-84af-2cfedbfa28f2" containerName="registry-server" Jan 27 21:53:49 crc kubenswrapper[4803]: E0127 21:53:49.073634 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" containerName="marketplace-operator" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.073643 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4194a6bb-5fcd-41e2-a1c0-9d5f743f31a0" containerName="marketplace-operator" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.074383 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9crs2" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.076901 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.095171 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9crs2"] Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.144000 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5265b8b-6b21-4c52-be79-e6c2a2f94a1d-utilities\") pod \"certified-operators-9crs2\" (UID: \"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d\") " pod="openshift-marketplace/certified-operators-9crs2" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.144274 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47g7d\" (UniqueName: \"kubernetes.io/projected/a5265b8b-6b21-4c52-be79-e6c2a2f94a1d-kube-api-access-47g7d\") pod \"certified-operators-9crs2\" (UID: \"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d\") " pod="openshift-marketplace/certified-operators-9crs2" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.144370 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5265b8b-6b21-4c52-be79-e6c2a2f94a1d-catalog-content\") pod \"certified-operators-9crs2\" (UID: \"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d\") " pod="openshift-marketplace/certified-operators-9crs2" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.245323 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5265b8b-6b21-4c52-be79-e6c2a2f94a1d-catalog-content\") pod \"certified-operators-9crs2\" (UID: \"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d\") " pod="openshift-marketplace/certified-operators-9crs2" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.245644 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5265b8b-6b21-4c52-be79-e6c2a2f94a1d-utilities\") pod \"certified-operators-9crs2\" (UID: \"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d\") " pod="openshift-marketplace/certified-operators-9crs2" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.245735 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47g7d\" (UniqueName: \"kubernetes.io/projected/a5265b8b-6b21-4c52-be79-e6c2a2f94a1d-kube-api-access-47g7d\") pod \"certified-operators-9crs2\" (UID: \"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d\") " pod="openshift-marketplace/certified-operators-9crs2" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.245882 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5265b8b-6b21-4c52-be79-e6c2a2f94a1d-catalog-content\") pod \"certified-operators-9crs2\" (UID: \"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d\") " pod="openshift-marketplace/certified-operators-9crs2" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.246083 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5265b8b-6b21-4c52-be79-e6c2a2f94a1d-utilities\") pod \"certified-operators-9crs2\" (UID: 
\"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d\") " pod="openshift-marketplace/certified-operators-9crs2" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.267800 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47g7d\" (UniqueName: \"kubernetes.io/projected/a5265b8b-6b21-4c52-be79-e6c2a2f94a1d-kube-api-access-47g7d\") pod \"certified-operators-9crs2\" (UID: \"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d\") " pod="openshift-marketplace/certified-operators-9crs2" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.394644 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9crs2" Jan 27 21:53:49 crc kubenswrapper[4803]: I0127 21:53:49.776731 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9crs2"] Jan 27 21:53:49 crc kubenswrapper[4803]: W0127 21:53:49.786336 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5265b8b_6b21_4c52_be79_e6c2a2f94a1d.slice/crio-5289d9ce4e35b789af3ea8328abf9e46c866b3a21ba510093c831e08c5dbaeba WatchSource:0}: Error finding container 5289d9ce4e35b789af3ea8328abf9e46c866b3a21ba510093c831e08c5dbaeba: Status 404 returned error can't find the container with id 5289d9ce4e35b789af3ea8328abf9e46c866b3a21ba510093c831e08c5dbaeba Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.069460 4803 generic.go:334] "Generic (PLEG): container finished" podID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerID="62b4f157c37610a314952dc3040f8e08f615ef3f12e6b7d990ddee3363e8d826" exitCode=0 Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.069532 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9crs2" event={"ID":"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d","Type":"ContainerDied","Data":"62b4f157c37610a314952dc3040f8e08f615ef3f12e6b7d990ddee3363e8d826"} Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.070278 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9crs2" event={"ID":"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d","Type":"ContainerStarted","Data":"5289d9ce4e35b789af3ea8328abf9e46c866b3a21ba510093c831e08c5dbaeba"} Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.070331 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-knvxh"] Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.072507 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-knvxh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.075783 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.084677 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knvxh"] Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.158111 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-utilities\") pod \"redhat-operators-knvxh\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") " pod="openshift-marketplace/redhat-operators-knvxh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.158177 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-catalog-content\") pod \"redhat-operators-knvxh\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") " pod="openshift-marketplace/redhat-operators-knvxh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.158232 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hznsm\" (UniqueName: \"kubernetes.io/projected/0b16bfe1-a641-480e-aef3-9217bd7f8842-kube-api-access-hznsm\") pod \"redhat-operators-knvxh\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") " pod="openshift-marketplace/redhat-operators-knvxh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.259526 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hznsm\" (UniqueName: \"kubernetes.io/projected/0b16bfe1-a641-480e-aef3-9217bd7f8842-kube-api-access-hznsm\") pod \"redhat-operators-knvxh\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") " pod="openshift-marketplace/redhat-operators-knvxh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.259717 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-utilities\") pod \"redhat-operators-knvxh\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") " pod="openshift-marketplace/redhat-operators-knvxh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.259768 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-catalog-content\") pod \"redhat-operators-knvxh\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") " pod="openshift-marketplace/redhat-operators-knvxh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.260287 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-utilities\") pod \"redhat-operators-knvxh\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") " pod="openshift-marketplace/redhat-operators-knvxh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.260293 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-catalog-content\") pod \"redhat-operators-knvxh\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") " 
pod="openshift-marketplace/redhat-operators-knvxh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.278037 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hznsm\" (UniqueName: \"kubernetes.io/projected/0b16bfe1-a641-480e-aef3-9217bd7f8842-kube-api-access-hznsm\") pod \"redhat-operators-knvxh\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") " pod="openshift-marketplace/redhat-operators-knvxh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.402718 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knvxh" Jan 27 21:53:50 crc kubenswrapper[4803]: I0127 21:53:50.802882 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knvxh"] Jan 27 21:53:50 crc kubenswrapper[4803]: W0127 21:53:50.807043 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b16bfe1_a641_480e_aef3_9217bd7f8842.slice/crio-50bd4c303fa4f87b888095e7dcce7d6bedf8dac02dfb3224ac51f61ce19a0d04 WatchSource:0}: Error finding container 50bd4c303fa4f87b888095e7dcce7d6bedf8dac02dfb3224ac51f61ce19a0d04: Status 404 returned error can't find the container with id 50bd4c303fa4f87b888095e7dcce7d6bedf8dac02dfb3224ac51f61ce19a0d04 Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.076813 4803 generic.go:334] "Generic (PLEG): container finished" podID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerID="d6bb0b5bfb7b55f5783c0a08c9cd7bedf773f6f260d352da309f0bac308b76cb" exitCode=0 Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.076904 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9crs2" event={"ID":"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d","Type":"ContainerDied","Data":"d6bb0b5bfb7b55f5783c0a08c9cd7bedf773f6f260d352da309f0bac308b76cb"} Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.079473 4803 generic.go:334] "Generic (PLEG): container finished" podID="0b16bfe1-a641-480e-aef3-9217bd7f8842" containerID="6bd1ebdc6582812beb55a292517e8ebb9cc67e9dfd34127182fa5425b816a6de" exitCode=0 Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.079532 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvxh" event={"ID":"0b16bfe1-a641-480e-aef3-9217bd7f8842","Type":"ContainerDied","Data":"6bd1ebdc6582812beb55a292517e8ebb9cc67e9dfd34127182fa5425b816a6de"} Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.079570 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvxh" event={"ID":"0b16bfe1-a641-480e-aef3-9217bd7f8842","Type":"ContainerStarted","Data":"50bd4c303fa4f87b888095e7dcce7d6bedf8dac02dfb3224ac51f61ce19a0d04"} Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.467145 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9nds5"] Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.468376 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nds5" Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.474801 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.486443 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nds5"] Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.574555 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f28d4382-79f1-4254-a4fa-fced45178594-catalog-content\") pod \"community-operators-9nds5\" (UID: \"f28d4382-79f1-4254-a4fa-fced45178594\") " pod="openshift-marketplace/community-operators-9nds5" Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.574902 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb4mr\" (UniqueName: \"kubernetes.io/projected/f28d4382-79f1-4254-a4fa-fced45178594-kube-api-access-mb4mr\") pod \"community-operators-9nds5\" (UID: \"f28d4382-79f1-4254-a4fa-fced45178594\") " pod="openshift-marketplace/community-operators-9nds5" Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.574936 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f28d4382-79f1-4254-a4fa-fced45178594-utilities\") pod \"community-operators-9nds5\" (UID: \"f28d4382-79f1-4254-a4fa-fced45178594\") " pod="openshift-marketplace/community-operators-9nds5" Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.675706 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb4mr\" (UniqueName: \"kubernetes.io/projected/f28d4382-79f1-4254-a4fa-fced45178594-kube-api-access-mb4mr\") pod \"community-operators-9nds5\" (UID: \"f28d4382-79f1-4254-a4fa-fced45178594\") " pod="openshift-marketplace/community-operators-9nds5" Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.675777 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f28d4382-79f1-4254-a4fa-fced45178594-utilities\") pod \"community-operators-9nds5\" (UID: \"f28d4382-79f1-4254-a4fa-fced45178594\") " pod="openshift-marketplace/community-operators-9nds5" Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.675818 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f28d4382-79f1-4254-a4fa-fced45178594-catalog-content\") pod \"community-operators-9nds5\" (UID: \"f28d4382-79f1-4254-a4fa-fced45178594\") " pod="openshift-marketplace/community-operators-9nds5" Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.676241 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f28d4382-79f1-4254-a4fa-fced45178594-catalog-content\") pod \"community-operators-9nds5\" (UID: \"f28d4382-79f1-4254-a4fa-fced45178594\") " pod="openshift-marketplace/community-operators-9nds5" Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.676364 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f28d4382-79f1-4254-a4fa-fced45178594-utilities\") pod \"community-operators-9nds5\" (UID: 
\"f28d4382-79f1-4254-a4fa-fced45178594\") " pod="openshift-marketplace/community-operators-9nds5" Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.695321 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb4mr\" (UniqueName: \"kubernetes.io/projected/f28d4382-79f1-4254-a4fa-fced45178594-kube-api-access-mb4mr\") pod \"community-operators-9nds5\" (UID: \"f28d4382-79f1-4254-a4fa-fced45178594\") " pod="openshift-marketplace/community-operators-9nds5" Jan 27 21:53:51 crc kubenswrapper[4803]: I0127 21:53:51.846426 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9nds5" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.087640 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9crs2" event={"ID":"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d","Type":"ContainerStarted","Data":"1c84092e5a169af46263a90c73f579ab311ad67ffe76af8648b49a818e27a622"} Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.089666 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvxh" event={"ID":"0b16bfe1-a641-480e-aef3-9217bd7f8842","Type":"ContainerStarted","Data":"2cbe96141128726d54bbac41be7ebec0f46dc3c57f3767f62c00be824457febb"} Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.106324 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9crs2" podStartSLOduration=1.7013111859999999 podStartE2EDuration="3.106314738s" podCreationTimestamp="2026-01-27 21:53:49 +0000 UTC" firstStartedPulling="2026-01-27 21:53:50.071952274 +0000 UTC m=+382.487973973" lastFinishedPulling="2026-01-27 21:53:51.476955826 +0000 UTC m=+383.892977525" observedRunningTime="2026-01-27 21:53:52.102333621 +0000 UTC m=+384.518355320" watchObservedRunningTime="2026-01-27 21:53:52.106314738 +0000 UTC m=+384.522336437" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.244804 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nds5"] Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.468974 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j4445"] Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.470640 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.472831 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.479401 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4445"] Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.484816 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7mp\" (UniqueName: \"kubernetes.io/projected/b99815d1-e732-429a-afb0-7e2328eb4a80-kube-api-access-9w7mp\") pod \"redhat-marketplace-j4445\" (UID: \"b99815d1-e732-429a-afb0-7e2328eb4a80\") " pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.484878 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-utilities\") pod \"redhat-marketplace-j4445\" (UID: \"b99815d1-e732-429a-afb0-7e2328eb4a80\") " pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.484913 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-catalog-content\") pod \"redhat-marketplace-j4445\" (UID: \"b99815d1-e732-429a-afb0-7e2328eb4a80\") " pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.585862 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-catalog-content\") pod \"redhat-marketplace-j4445\" (UID: \"b99815d1-e732-429a-afb0-7e2328eb4a80\") " pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.585945 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7mp\" (UniqueName: \"kubernetes.io/projected/b99815d1-e732-429a-afb0-7e2328eb4a80-kube-api-access-9w7mp\") pod \"redhat-marketplace-j4445\" (UID: \"b99815d1-e732-429a-afb0-7e2328eb4a80\") " pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.585974 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-utilities\") pod \"redhat-marketplace-j4445\" (UID: \"b99815d1-e732-429a-afb0-7e2328eb4a80\") " pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.586340 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-utilities\") pod \"redhat-marketplace-j4445\" (UID: \"b99815d1-e732-429a-afb0-7e2328eb4a80\") " pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.586542 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-catalog-content\") pod \"redhat-marketplace-j4445\" (UID: 
\"b99815d1-e732-429a-afb0-7e2328eb4a80\") " pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.603452 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w7mp\" (UniqueName: \"kubernetes.io/projected/b99815d1-e732-429a-afb0-7e2328eb4a80-kube-api-access-9w7mp\") pod \"redhat-marketplace-j4445\" (UID: \"b99815d1-e732-429a-afb0-7e2328eb4a80\") " pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 21:53:52 crc kubenswrapper[4803]: I0127 21:53:52.787776 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 21:53:53 crc kubenswrapper[4803]: I0127 21:53:53.096581 4803 generic.go:334] "Generic (PLEG): container finished" podID="f28d4382-79f1-4254-a4fa-fced45178594" containerID="0f5467bf568643fd20a976381288c309b3ae4ab37677e7553d09400461f693da" exitCode=0 Jan 27 21:53:53 crc kubenswrapper[4803]: I0127 21:53:53.096775 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nds5" event={"ID":"f28d4382-79f1-4254-a4fa-fced45178594","Type":"ContainerDied","Data":"0f5467bf568643fd20a976381288c309b3ae4ab37677e7553d09400461f693da"} Jan 27 21:53:53 crc kubenswrapper[4803]: I0127 21:53:53.096918 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nds5" event={"ID":"f28d4382-79f1-4254-a4fa-fced45178594","Type":"ContainerStarted","Data":"a3f24c1f4963bf3feea2b166e429fb1550a110175a66c6ccfbb683d1915292ee"} Jan 27 21:53:53 crc kubenswrapper[4803]: I0127 21:53:53.099567 4803 generic.go:334] "Generic (PLEG): container finished" podID="0b16bfe1-a641-480e-aef3-9217bd7f8842" containerID="2cbe96141128726d54bbac41be7ebec0f46dc3c57f3767f62c00be824457febb" exitCode=0 Jan 27 21:53:53 crc kubenswrapper[4803]: I0127 21:53:53.100408 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvxh" event={"ID":"0b16bfe1-a641-480e-aef3-9217bd7f8842","Type":"ContainerDied","Data":"2cbe96141128726d54bbac41be7ebec0f46dc3c57f3767f62c00be824457febb"} Jan 27 21:53:53 crc kubenswrapper[4803]: I0127 21:53:53.185189 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4445"] Jan 27 21:53:53 crc kubenswrapper[4803]: W0127 21:53:53.188647 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb99815d1_e732_429a_afb0_7e2328eb4a80.slice/crio-37d22dfdc90a75a211ecda61a0f832de1edbbaf642d329c650bc69d9e4227cc9 WatchSource:0}: Error finding container 37d22dfdc90a75a211ecda61a0f832de1edbbaf642d329c650bc69d9e4227cc9: Status 404 returned error can't find the container with id 37d22dfdc90a75a211ecda61a0f832de1edbbaf642d329c650bc69d9e4227cc9 Jan 27 21:53:54 crc kubenswrapper[4803]: I0127 21:53:54.105548 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nds5" event={"ID":"f28d4382-79f1-4254-a4fa-fced45178594","Type":"ContainerStarted","Data":"8b8ef75939259bc9e290d6a9a5814cc558419de61fefac728d386579340e5b68"} Jan 27 21:53:54 crc kubenswrapper[4803]: I0127 21:53:54.107580 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvxh" event={"ID":"0b16bfe1-a641-480e-aef3-9217bd7f8842","Type":"ContainerStarted","Data":"aa88b3ba9fb2f6029b80b664897c036df8ac48b6e29ecdaa2db6e5b76c839f90"} Jan 27 21:53:54 crc 
kubenswrapper[4803]: I0127 21:53:54.108907 4803 generic.go:334] "Generic (PLEG): container finished" podID="b99815d1-e732-429a-afb0-7e2328eb4a80" containerID="6db437d47fe8f0e6877748b0e54194c3a4e430dec6e10e01ae703c743350f23f" exitCode=0 Jan 27 21:53:54 crc kubenswrapper[4803]: I0127 21:53:54.108939 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4445" event={"ID":"b99815d1-e732-429a-afb0-7e2328eb4a80","Type":"ContainerDied","Data":"6db437d47fe8f0e6877748b0e54194c3a4e430dec6e10e01ae703c743350f23f"} Jan 27 21:53:54 crc kubenswrapper[4803]: I0127 21:53:54.108955 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4445" event={"ID":"b99815d1-e732-429a-afb0-7e2328eb4a80","Type":"ContainerStarted","Data":"37d22dfdc90a75a211ecda61a0f832de1edbbaf642d329c650bc69d9e4227cc9"} Jan 27 21:53:54 crc kubenswrapper[4803]: I0127 21:53:54.143456 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-knvxh" podStartSLOduration=1.776992422 podStartE2EDuration="4.143438557s" podCreationTimestamp="2026-01-27 21:53:50 +0000 UTC" firstStartedPulling="2026-01-27 21:53:51.085479693 +0000 UTC m=+383.501501382" lastFinishedPulling="2026-01-27 21:53:53.451925818 +0000 UTC m=+385.867947517" observedRunningTime="2026-01-27 21:53:54.143096697 +0000 UTC m=+386.559118396" watchObservedRunningTime="2026-01-27 21:53:54.143438557 +0000 UTC m=+386.559460256" Jan 27 21:53:55 crc kubenswrapper[4803]: I0127 21:53:55.121211 4803 generic.go:334] "Generic (PLEG): container finished" podID="b99815d1-e732-429a-afb0-7e2328eb4a80" containerID="45513cf634b4312942936a5552bcd8db7eb718723dd98db65ba798692f19c45b" exitCode=0 Jan 27 21:53:55 crc kubenswrapper[4803]: I0127 21:53:55.121251 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4445" event={"ID":"b99815d1-e732-429a-afb0-7e2328eb4a80","Type":"ContainerDied","Data":"45513cf634b4312942936a5552bcd8db7eb718723dd98db65ba798692f19c45b"} Jan 27 21:53:55 crc kubenswrapper[4803]: I0127 21:53:55.125202 4803 generic.go:334] "Generic (PLEG): container finished" podID="f28d4382-79f1-4254-a4fa-fced45178594" containerID="8b8ef75939259bc9e290d6a9a5814cc558419de61fefac728d386579340e5b68" exitCode=0 Jan 27 21:53:55 crc kubenswrapper[4803]: I0127 21:53:55.125236 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nds5" event={"ID":"f28d4382-79f1-4254-a4fa-fced45178594","Type":"ContainerDied","Data":"8b8ef75939259bc9e290d6a9a5814cc558419de61fefac728d386579340e5b68"} Jan 27 21:53:56 crc kubenswrapper[4803]: I0127 21:53:56.320709 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nds5" event={"ID":"f28d4382-79f1-4254-a4fa-fced45178594","Type":"ContainerStarted","Data":"d06e6be93e46765a13fa6664692c7463799cde50407a37cfe737f3841cdd2b9c"} Jan 27 21:53:56 crc kubenswrapper[4803]: I0127 21:53:56.362025 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9nds5" podStartSLOduration=2.937174669 podStartE2EDuration="5.36200482s" podCreationTimestamp="2026-01-27 21:53:51 +0000 UTC" firstStartedPulling="2026-01-27 21:53:53.098270234 +0000 UTC m=+385.514291933" lastFinishedPulling="2026-01-27 21:53:55.523100385 +0000 UTC m=+387.939122084" observedRunningTime="2026-01-27 21:53:56.360673235 +0000 UTC m=+388.776694954" 
Jan 27 21:53:57 crc kubenswrapper[4803]: I0127 21:53:57.330126 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4445" event={"ID":"b99815d1-e732-429a-afb0-7e2328eb4a80","Type":"ContainerStarted","Data":"aefcc1a457f81117e6f31fdcde6645a472bccf963166b93fe6882798be05e1ea"}
Jan 27 21:53:57 crc kubenswrapper[4803]: I0127 21:53:57.347921 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j4445" podStartSLOduration=3.123721228 podStartE2EDuration="5.347901383s" podCreationTimestamp="2026-01-27 21:53:52 +0000 UTC" firstStartedPulling="2026-01-27 21:53:54.110326334 +0000 UTC m=+386.526348033" lastFinishedPulling="2026-01-27 21:53:56.334506489 +0000 UTC m=+388.750528188" observedRunningTime="2026-01-27 21:53:57.34558198 +0000 UTC m=+389.761603689" watchObservedRunningTime="2026-01-27 21:53:57.347901383 +0000 UTC m=+389.763923092"
Jan 27 21:53:59 crc kubenswrapper[4803]: I0127 21:53:59.395936 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9crs2"
Jan 27 21:53:59 crc kubenswrapper[4803]: I0127 21:53:59.396415 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9crs2"
Jan 27 21:53:59 crc kubenswrapper[4803]: I0127 21:53:59.454527 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9crs2"
Jan 27 21:54:00 crc kubenswrapper[4803]: I0127 21:54:00.386369 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9crs2"
Jan 27 21:54:00 crc kubenswrapper[4803]: I0127 21:54:00.402889 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-knvxh"
Jan 27 21:54:00 crc kubenswrapper[4803]: I0127 21:54:00.402951 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-knvxh"
Jan 27 21:54:00 crc kubenswrapper[4803]: I0127 21:54:00.461385 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-knvxh"
Jan 27 21:54:01 crc kubenswrapper[4803]: I0127 21:54:01.392183 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-knvxh"
Jan 27 21:54:01 crc kubenswrapper[4803]: I0127 21:54:01.847465 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9nds5"
Jan 27 21:54:01 crc kubenswrapper[4803]: I0127 21:54:01.847824 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9nds5"
Jan 27 21:54:01 crc kubenswrapper[4803]: I0127 21:54:01.892679 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9nds5"
Jan 27 21:54:02 crc kubenswrapper[4803]: I0127 21:54:02.393903 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9nds5"
Jan 27 21:54:02 crc kubenswrapper[4803]: I0127 21:54:02.788419 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j4445"
Jan 27 21:54:02 crc kubenswrapper[4803]: I0127 21:54:02.788458 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j4445"
Jan 27 21:54:02 crc kubenswrapper[4803]: I0127 21:54:02.824399 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j4445"
Jan 27 21:54:03 crc kubenswrapper[4803]: I0127 21:54:03.395446 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j4445"
Jan 27 21:54:16 crc kubenswrapper[4803]: I0127 21:54:16.343489 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 21:54:16 crc kubenswrapper[4803]: I0127 21:54:16.344287 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 21:54:16 crc kubenswrapper[4803]: I0127 21:54:16.344356 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp"
Jan 27 21:54:16 crc kubenswrapper[4803]: I0127 21:54:16.345978 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eab3307c7662fa4415bdda98a4550f98a4f3e4518c2ba81876e66dccef2535a4"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 21:54:16 crc kubenswrapper[4803]: I0127 21:54:16.346129 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://eab3307c7662fa4415bdda98a4550f98a4f3e4518c2ba81876e66dccef2535a4" gracePeriod=600
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.441290 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="eab3307c7662fa4415bdda98a4550f98a4f3e4518c2ba81876e66dccef2535a4" exitCode=0
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.441374 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"eab3307c7662fa4415bdda98a4550f98a4f3e4518c2ba81876e66dccef2535a4"}
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.442040 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"e8efaf7b446df272e0996a17c38530d9da7be7bbc83602d505bce00b2e3d7c50"}
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.442084 4803 scope.go:117] "RemoveContainer" containerID="3e3523388441ef8e09fd867eac66df30f3e8e087ce57c2907e372b3c783905d7"
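
The machine-config-daemon restart above is the standard liveness-probe path: the prober's GET against http://127.0.0.1:8798/health was refused, the kubelet marked the container unhealthy, killed it with a 600s grace period, and started a replacement. A minimal sketch of one such HTTP check, assuming the kubelet's usual convention that a transport error or a status outside 200-399 counts as a failure:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeOnce performs one HTTP liveness check. A transport error such as
    // "connect: connection refused" (as in the log above) or a status code
    // outside 200-399 is reported as a probe failure.
    func probeOnce(url string) error {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("unexpected status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
    		// Past its failure threshold, the kubelet would kill and
    		// restart the container, as the entries above show.
    		fmt.Println("Liveness probe failed:", err)
    	}
    }
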
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.814885 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"]
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.816349 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.820266 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.820490 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.821281 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.821738 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.823596 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"]
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.824048 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l"
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.972784 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bms2d\" (UniqueName: \"kubernetes.io/projected/5b35cfba-7019-4d29-b301-d24b2878a4ff-kube-api-access-bms2d\") pod \"cluster-monitoring-operator-6d5b84845-4s5rt\" (UID: \"5b35cfba-7019-4d29-b301-d24b2878a4ff\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.973033 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5b35cfba-7019-4d29-b301-d24b2878a4ff-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-4s5rt\" (UID: \"5b35cfba-7019-4d29-b301-d24b2878a4ff\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"
Jan 27 21:54:17 crc kubenswrapper[4803]: I0127 21:54:17.973323 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b35cfba-7019-4d29-b301-d24b2878a4ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-4s5rt\" (UID: \"5b35cfba-7019-4d29-b301-d24b2878a4ff\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"
Jan 27 21:54:18 crc kubenswrapper[4803]: I0127 21:54:18.074962 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5b35cfba-7019-4d29-b301-d24b2878a4ff-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-4s5rt\" (UID: \"5b35cfba-7019-4d29-b301-d24b2878a4ff\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"
Jan 27 21:54:18 crc kubenswrapper[4803]: I0127 21:54:18.075045 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b35cfba-7019-4d29-b301-d24b2878a4ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-4s5rt\" (UID: \"5b35cfba-7019-4d29-b301-d24b2878a4ff\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"
Jan 27 21:54:18 crc kubenswrapper[4803]: I0127 21:54:18.075110 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bms2d\" (UniqueName: \"kubernetes.io/projected/5b35cfba-7019-4d29-b301-d24b2878a4ff-kube-api-access-bms2d\") pod \"cluster-monitoring-operator-6d5b84845-4s5rt\" (UID: \"5b35cfba-7019-4d29-b301-d24b2878a4ff\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"
Jan 27 21:54:18 crc kubenswrapper[4803]: I0127 21:54:18.077295 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5b35cfba-7019-4d29-b301-d24b2878a4ff-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-4s5rt\" (UID: \"5b35cfba-7019-4d29-b301-d24b2878a4ff\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"
Jan 27 21:54:18 crc kubenswrapper[4803]: I0127 21:54:18.086121 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b35cfba-7019-4d29-b301-d24b2878a4ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-4s5rt\" (UID: \"5b35cfba-7019-4d29-b301-d24b2878a4ff\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"
Jan 27 21:54:18 crc kubenswrapper[4803]: I0127 21:54:18.096397 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bms2d\" (UniqueName: \"kubernetes.io/projected/5b35cfba-7019-4d29-b301-d24b2878a4ff-kube-api-access-bms2d\") pod \"cluster-monitoring-operator-6d5b84845-4s5rt\" (UID: \"5b35cfba-7019-4d29-b301-d24b2878a4ff\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"
Jan 27 21:54:18 crc kubenswrapper[4803]: I0127 21:54:18.144570 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"
Jan 27 21:54:18 crc kubenswrapper[4803]: I0127 21:54:18.582865 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt"]
Jan 27 21:54:18 crc kubenswrapper[4803]: W0127 21:54:18.592136 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b35cfba_7019_4d29_b301_d24b2878a4ff.slice/crio-f05143e848e16da6bb51878a04d8ded070e17ffbbe244ab2bcb5bda3feaba7b1 WatchSource:0}: Error finding container f05143e848e16da6bb51878a04d8ded070e17ffbbe244ab2bcb5bda3feaba7b1: Status 404 returned error can't find the container with id f05143e848e16da6bb51878a04d8ded070e17ffbbe244ab2bcb5bda3feaba7b1
Jan 27 21:54:19 crc kubenswrapper[4803]: I0127 21:54:19.456323 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt" event={"ID":"5b35cfba-7019-4d29-b301-d24b2878a4ff","Type":"ContainerStarted","Data":"f05143e848e16da6bb51878a04d8ded070e17ffbbe244ab2bcb5bda3feaba7b1"}
Jan 27 21:54:20 crc kubenswrapper[4803]: I0127 21:54:20.461327 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt" event={"ID":"5b35cfba-7019-4d29-b301-d24b2878a4ff","Type":"ContainerStarted","Data":"07535fba1be55e9f0e4178bbede262e01c8969b51e0c12e8f7c8d575bb9d5fce"}
Jan 27 21:54:20 crc kubenswrapper[4803]: I0127 21:54:20.477103 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-4s5rt" podStartSLOduration=1.882724436 podStartE2EDuration="3.477080081s" podCreationTimestamp="2026-01-27 21:54:17 +0000 UTC" firstStartedPulling="2026-01-27 21:54:18.594076571 +0000 UTC m=+411.010098270" lastFinishedPulling="2026-01-27 21:54:20.188432216 +0000 UTC m=+412.604453915" observedRunningTime="2026-01-27 21:54:20.475809046 +0000 UTC m=+412.891830755" watchObservedRunningTime="2026-01-27 21:54:20.477080081 +0000 UTC m=+412.893101780"
Jan 27 21:54:20 crc kubenswrapper[4803]: I0127 21:54:20.710715 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"]
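
The event={...} payloads in the PLEG lines are printed as JSON with three fields: the pod UID, the event type (ContainerStarted, ContainerDied, ...), and the container or sandbox ID. A small Go sketch that unmarshals one of the payloads logged above; the struct here is illustrative, not the kubelet's internal type:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // podLifecycleEvent mirrors the shape of the event={...} payload as it
    // appears in these log lines.
    type podLifecycleEvent struct {
    	ID   string `json:"ID"`   // pod UID
    	Type string `json:"Type"` // ContainerStarted, ContainerDied, ...
    	Data string `json:"Data"` // container or sandbox ID
    }

    func main() {
    	// Payload copied verbatim from the cluster-monitoring-operator event above.
    	raw := `{"ID":"5b35cfba-7019-4d29-b301-d24b2878a4ff","Type":"ContainerStarted","Data":"f05143e848e16da6bb51878a04d8ded070e17ffbbe244ab2bcb5bda3feaba7b1"}`
    	var ev podLifecycleEvent
    	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod %s: %s %s...\n", ev.ID, ev.Type, ev.Data[:12])
    }
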
Jan 27 21:54:20 crc kubenswrapper[4803]: I0127 21:54:20.711372 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"
Jan 27 21:54:20 crc kubenswrapper[4803]: I0127 21:54:20.713168 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-rk72h"
Jan 27 21:54:20 crc kubenswrapper[4803]: I0127 21:54:20.713421 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Jan 27 21:54:20 crc kubenswrapper[4803]: I0127 21:54:20.720261 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"]
Jan 27 21:54:20 crc kubenswrapper[4803]: I0127 21:54:20.808602 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/620f5cd9-d7ac-436d-8d1f-66617d4fe1a3-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-hgn8v\" (UID: \"620f5cd9-d7ac-436d-8d1f-66617d4fe1a3\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"
Jan 27 21:54:20 crc kubenswrapper[4803]: I0127 21:54:20.910121 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/620f5cd9-d7ac-436d-8d1f-66617d4fe1a3-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-hgn8v\" (UID: \"620f5cd9-d7ac-436d-8d1f-66617d4fe1a3\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"
Jan 27 21:54:20 crc kubenswrapper[4803]: I0127 21:54:20.918135 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/620f5cd9-d7ac-436d-8d1f-66617d4fe1a3-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-hgn8v\" (UID: \"620f5cd9-d7ac-436d-8d1f-66617d4fe1a3\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"
Jan 27 21:54:21 crc kubenswrapper[4803]: I0127 21:54:21.027413 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"
Jan 27 21:54:21 crc kubenswrapper[4803]: I0127 21:54:21.410435 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"]
Jan 27 21:54:21 crc kubenswrapper[4803]: W0127 21:54:21.418926 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod620f5cd9_d7ac_436d_8d1f_66617d4fe1a3.slice/crio-2fec9425cd805323c5c2bfb3a9c6b15a3ad857af84b878a16291719dcb4cf423 WatchSource:0}: Error finding container 2fec9425cd805323c5c2bfb3a9c6b15a3ad857af84b878a16291719dcb4cf423: Status 404 returned error can't find the container with id 2fec9425cd805323c5c2bfb3a9c6b15a3ad857af84b878a16291719dcb4cf423
Jan 27 21:54:21 crc kubenswrapper[4803]: I0127 21:54:21.467267 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" event={"ID":"620f5cd9-d7ac-436d-8d1f-66617d4fe1a3","Type":"ContainerStarted","Data":"2fec9425cd805323c5c2bfb3a9c6b15a3ad857af84b878a16291719dcb4cf423"}
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.478654 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" event={"ID":"620f5cd9-d7ac-436d-8d1f-66617d4fe1a3","Type":"ContainerStarted","Data":"68a1735763950ee03fe69618654ce8e6975629d83b38f0c28c49523e11400654"}
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.479221 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.487817 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.498211 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podStartSLOduration=2.343723597 podStartE2EDuration="3.498189236s" podCreationTimestamp="2026-01-27 21:54:20 +0000 UTC" firstStartedPulling="2026-01-27 21:54:21.422173297 +0000 UTC m=+413.838194996" lastFinishedPulling="2026-01-27 21:54:22.576638936 +0000 UTC m=+414.992660635" observedRunningTime="2026-01-27 21:54:23.496439988 +0000 UTC m=+415.912461687" watchObservedRunningTime="2026-01-27 21:54:23.498189236 +0000 UTC m=+415.914210975"
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.789391 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-k7q2v"]
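
Every kubenswrapper message carries a klog header: severity (I/W/E/F), MMDD, wall time, PID, and source file:line, followed by the message. A regex sketch for pulling those fields apart when filtering logs like the 404 warning above; this is hand-rolled for this format, not a general journald parser:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klog header: <severity><MMDD> <HH:MM:SS.micros> <pid> <file:line>] <message>
    var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

    func main() {
    	// Header copied from the warning above; message truncated for brevity.
    	line := `W0127 21:54:21.418926 4803 manager.go:1169] Failed to process watch event`
    	m := klogRe.FindStringSubmatch(line)
    	if m == nil {
    		panic("line did not match klog header")
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6])
    }
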
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.790879 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.796367 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-k7q2v"]
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.797129 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.797396 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.797727 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.797914 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-lg4mt"
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.959692 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.959725 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ts6p\" (UniqueName: \"kubernetes.io/projected/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-kube-api-access-7ts6p\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.959821 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-metrics-client-ca\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:23 crc kubenswrapper[4803]: I0127 21:54:23.959958 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:24 crc kubenswrapper[4803]: I0127 21:54:24.060752 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:24 crc kubenswrapper[4803]: I0127 21:54:24.060881 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:24 crc kubenswrapper[4803]: I0127 21:54:24.060905 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ts6p\" (UniqueName: \"kubernetes.io/projected/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-kube-api-access-7ts6p\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:24 crc kubenswrapper[4803]: I0127 21:54:24.060967 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-metrics-client-ca\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:24 crc kubenswrapper[4803]: I0127 21:54:24.062147 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-metrics-client-ca\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:24 crc kubenswrapper[4803]: I0127 21:54:24.067658 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:24 crc kubenswrapper[4803]: I0127 21:54:24.067837 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:24 crc kubenswrapper[4803]: I0127 21:54:24.086126 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ts6p\" (UniqueName: \"kubernetes.io/projected/ded8c137-0d76-47c7-a2e6-d7d3804dff0e-kube-api-access-7ts6p\") pod \"prometheus-operator-db54df47d-k7q2v\" (UID: \"ded8c137-0d76-47c7-a2e6-d7d3804dff0e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:24 crc kubenswrapper[4803]: I0127 21:54:24.120178 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v"
Jan 27 21:54:24 crc kubenswrapper[4803]: I0127 21:54:24.611626 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-k7q2v"]
Jan 27 21:54:24 crc kubenswrapper[4803]: W0127 21:54:24.623220 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podded8c137_0d76_47c7_a2e6_d7d3804dff0e.slice/crio-a29fff6c17209efadfc51f4d6230aa7bb4d63340e772d6001f33eb16568555e8 WatchSource:0}: Error finding container a29fff6c17209efadfc51f4d6230aa7bb4d63340e772d6001f33eb16568555e8: Status 404 returned error can't find the container with id a29fff6c17209efadfc51f4d6230aa7bb4d63340e772d6001f33eb16568555e8
Jan 27 21:54:25 crc kubenswrapper[4803]: I0127 21:54:25.492300 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v" event={"ID":"ded8c137-0d76-47c7-a2e6-d7d3804dff0e","Type":"ContainerStarted","Data":"a29fff6c17209efadfc51f4d6230aa7bb4d63340e772d6001f33eb16568555e8"}
Jan 27 21:54:26 crc kubenswrapper[4803]: I0127 21:54:26.499023 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v" event={"ID":"ded8c137-0d76-47c7-a2e6-d7d3804dff0e","Type":"ContainerStarted","Data":"bf249e48e2a680afcf8d6070d950bb3832dae388cd3c6d11b0825872af8ec07b"}
Jan 27 21:54:26 crc kubenswrapper[4803]: I0127 21:54:26.499376 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v" event={"ID":"ded8c137-0d76-47c7-a2e6-d7d3804dff0e","Type":"ContainerStarted","Data":"8a5f5f119b326910133ed7af9a34ebbdd7046d7bdcdfc9eea7065742a41d9008"}
Jan 27 21:54:26 crc kubenswrapper[4803]: I0127 21:54:26.520503 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-k7q2v" podStartSLOduration=2.139309764 podStartE2EDuration="3.520481423s" podCreationTimestamp="2026-01-27 21:54:23 +0000 UTC" firstStartedPulling="2026-01-27 21:54:24.626741744 +0000 UTC m=+417.042763443" lastFinishedPulling="2026-01-27 21:54:26.007913393 +0000 UTC m=+418.423935102" observedRunningTime="2026-01-27 21:54:26.51557329 +0000 UTC m=+418.931594989" watchObservedRunningTime="2026-01-27 21:54:26.520481423 +0000 UTC m=+418.936503142"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.121556 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"]
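
Each volume above goes through the same three stages in order: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). A toy Go sequence just to make that ordering explicit; the real reconciler runs these as asynchronous pending operations, so this is a sketch of the logged flow, not of the kubelet's implementation:

    package main

    import "fmt"

    // mountVolume walks one volume through the three stages the reconciler
    // logs for every volume above. Purely illustrative.
    func mountVolume(volume, pod string, setUp func() error) error {
    	fmt.Printf("VerifyControllerAttachedVolume started for %q (pod %s)\n", volume, pod)
    	fmt.Printf("MountVolume started for %q (pod %s)\n", volume, pod)
    	if err := setUp(); err != nil {
    		return fmt.Errorf("MountVolume.SetUp failed for %q: %w", volume, err)
    	}
    	fmt.Printf("MountVolume.SetUp succeeded for %q (pod %s)\n", volume, pod)
    	return nil
    }

    func main() {
    	// Volume names taken from the prometheus-operator-db54df47d-k7q2v entries above.
    	for _, v := range []string{"metrics-client-ca", "prometheus-operator-tls", "kube-api-access-7ts6p"} {
    		if err := mountVolume(v, "prometheus-operator-db54df47d-k7q2v", func() error { return nil }); err != nil {
    			fmt.Println(err)
    		}
    	}
    }
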
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.122560 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.124863 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.124987 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-thhqx"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.135867 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.148416 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-thckw"]
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.150283 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.151914 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.152834 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-nfqs2"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.154289 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.156345 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"]
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.162657 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.167906 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"]
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.168663 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.169143 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.169461 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.169776 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-vfvgc"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.183626 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"]
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.227939 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9j2f\" (UniqueName: \"kubernetes.io/projected/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-kube-api-access-f9j2f\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.227983 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slkjb\" (UniqueName: \"kubernetes.io/projected/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-kube-api-access-slkjb\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228018 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-tls\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228052 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228074 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228099 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228118 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-sys\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228139 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-textfile\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228175 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228194 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-root\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228210 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/010ebda8-11ce-4c48-8f53-d3331ad94530-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228226 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228256 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjgt2\" (UniqueName: \"kubernetes.io/projected/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-api-access-xjgt2\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228277 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228299 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-metrics-client-ca\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228319 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228342 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-wtmp\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.228356 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/010ebda8-11ce-4c48-8f53-d3331ad94530-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329472 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329526 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329559 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-sys\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329592 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-textfile\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329613 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329637 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-root\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329652 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/010ebda8-11ce-4c48-8f53-d3331ad94530-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329669 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329685 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjgt2\" (UniqueName: \"kubernetes.io/projected/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-api-access-xjgt2\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329707 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329729 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-metrics-client-ca\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329747 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329761 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-wtmp\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329774 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/010ebda8-11ce-4c48-8f53-d3331ad94530-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329804 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9j2f\" (UniqueName: \"kubernetes.io/projected/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-kube-api-access-f9j2f\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329823 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slkjb\" (UniqueName: \"kubernetes.io/projected/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-kube-api-access-slkjb\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329867 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-tls\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.329886 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.330050 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-sys\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.330477 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-textfile\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.330658 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-wtmp\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.330828 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.330963 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-root\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.331341 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/010ebda8-11ce-4c48-8f53-d3331ad94530-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.331425 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/010ebda8-11ce-4c48-8f53-d3331ad94530-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.331801 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-metrics-client-ca\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.334440 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.335880 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.336132 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.336342 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.336469 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.336587 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.336687 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Jan 27 21:54:28 crc kubenswrapper[4803]: E0127 21:54:28.341146 4803 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found
Jan 27 21:54:28 crc kubenswrapper[4803]: E0127 21:54:28.341214 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-openshift-state-metrics-tls podName:97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb nodeName:}" failed. No retries permitted until 2026-01-27 21:54:28.841197058 +0000 UTC m=+421.257218757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-shq9j" (UID: "97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb") : secret "openshift-state-metrics-tls" not found
Jan 27 21:54:28 crc kubenswrapper[4803]: E0127 21:54:28.341388 4803 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Jan 27 21:54:28 crc kubenswrapper[4803]: E0127 21:54:28.341516 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-tls podName:fb1a54f1-5bc0-49b1-b200-6088358ce2e8 nodeName:}" failed. No retries permitted until 2026-01-27 21:54:28.841494856 +0000 UTC m=+421.257516555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-tls") pod "node-exporter-thckw" (UID: "fb1a54f1-5bc0-49b1-b200-6088358ce2e8") : secret "node-exporter-tls" not found
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.343194 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.350298 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.354257 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.354921 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjgt2\" (UniqueName: \"kubernetes.io/projected/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-api-access-xjgt2\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
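
The two E-level failures above are not fatal: nestedpendingoperations parks each failed mount and schedules a retry, and the log records the first durationBeforeRetry of 500ms. The missing secrets turn up moments later and the retried SetUp succeeds (the 21:54:28.945422/945434 entries below). A sketch of that retry-with-backoff shape; the initial 500ms comes from the log, while the doubling factor and the cap here are assumptions, not values taken from these lines:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func mountWithRetry(mount func() error) {
    	delay := 500 * time.Millisecond // first durationBeforeRetry seen in the log
    	for {
    		err := mount()
    		if err == nil {
    			fmt.Println("MountVolume.SetUp succeeded")
    			return
    		}
    		fmt.Printf("SetUp failed (%v); no retries permitted for %v\n", err, delay)
    		time.Sleep(delay)
    		if delay *= 2; delay > 2*time.Minute { // assumed growth and cap
    			delay = 2 * time.Minute
    		}
    	}
    }

    func main() {
    	attempts := 0
    	mountWithRetry(func() error {
    		if attempts++; attempts < 2 {
    			return errors.New(`secret "openshift-state-metrics-tls" not found`)
    		}
    		return nil // secret created in the meantime; the retried mount goes through
    	})
    }
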
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.357982 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9j2f\" (UniqueName: \"kubernetes.io/projected/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-kube-api-access-f9j2f\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.360378 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slkjb\" (UniqueName: \"kubernetes.io/projected/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-kube-api-access-slkjb\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.360429 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.360805 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/010ebda8-11ce-4c48-8f53-d3331ad94530-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-4c9h9\" (UID: \"010ebda8-11ce-4c48-8f53-d3331ad94530\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.480133 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-vfvgc"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.492118 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.939666 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-tls\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.940050 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.945422 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/fb1a54f1-5bc0-49b1-b200-6088358ce2e8-node-exporter-tls\") pod \"node-exporter-thckw\" (UID: \"fb1a54f1-5bc0-49b1-b200-6088358ce2e8\") " pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.945434 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-shq9j\" (UID: \"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:28 crc kubenswrapper[4803]: W0127 21:54:28.947098 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod010ebda8_11ce_4c48_8f53_d3331ad94530.slice/crio-99e621161cab4ba6f6f84d162bff90429cb4d3e3d98b94644eea00ddc21867ad WatchSource:0}: Error finding container 99e621161cab4ba6f6f84d162bff90429cb4d3e3d98b94644eea00ddc21867ad: Status 404 returned error can't find the container with id 99e621161cab4ba6f6f84d162bff90429cb4d3e3d98b94644eea00ddc21867ad
Jan 27 21:54:28 crc kubenswrapper[4803]: I0127 21:54:28.958443 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9"]
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.039339 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-thhqx"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.048021 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.066547 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-nfqs2"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.074802 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-thckw"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.204879 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.206505 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.212394 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.212453 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.212563 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.213005 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-9mgr2"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.213438 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.214621 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.214735 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.214889 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.220293 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.244566 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.344337 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.344386 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bd866823-92fe-4aef-abe7-d7ecc8da30f7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.344410 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bd866823-92fe-4aef-abe7-d7ecc8da30f7-config-out\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.344458 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-config-volume\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.344492 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l9ck\" (UniqueName: \"kubernetes.io/projected/bd866823-92fe-4aef-abe7-d7ecc8da30f7-kube-api-access-7l9ck\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.344556 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd866823-92fe-4aef-abe7-d7ecc8da30f7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.344599 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bd866823-92fe-4aef-abe7-d7ecc8da30f7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.344631 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.344671 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/bd866823-92fe-4aef-abe7-d7ecc8da30f7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.347145 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.347229 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-web-config\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.347322 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.448506 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName:
\"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-config-volume\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.448597 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l9ck\" (UniqueName: \"kubernetes.io/projected/bd866823-92fe-4aef-abe7-d7ecc8da30f7-kube-api-access-7l9ck\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.448675 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd866823-92fe-4aef-abe7-d7ecc8da30f7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.448704 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bd866823-92fe-4aef-abe7-d7ecc8da30f7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.448732 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.448794 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/bd866823-92fe-4aef-abe7-d7ecc8da30f7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.448838 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.448890 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-web-config\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.448928 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.448958 4803 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.448984 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bd866823-92fe-4aef-abe7-d7ecc8da30f7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.449009 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bd866823-92fe-4aef-abe7-d7ecc8da30f7-config-out\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.450047 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/bd866823-92fe-4aef-abe7-d7ecc8da30f7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.450691 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bd866823-92fe-4aef-abe7-d7ecc8da30f7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.451010 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd866823-92fe-4aef-abe7-d7ecc8da30f7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.454344 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bd866823-92fe-4aef-abe7-d7ecc8da30f7-config-out\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.454356 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-config-volume\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.454799 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bd866823-92fe-4aef-abe7-d7ecc8da30f7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.455200 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.455245 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.456604 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.457167 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-web-config\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.462618 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bd866823-92fe-4aef-abe7-d7ecc8da30f7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.466956 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l9ck\" (UniqueName: \"kubernetes.io/projected/bd866823-92fe-4aef-abe7-d7ecc8da30f7-kube-api-access-7l9ck\") pod \"alertmanager-main-0\" (UID: \"bd866823-92fe-4aef-abe7-d7ecc8da30f7\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.517607 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-shq9j"] Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.518153 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9" event={"ID":"010ebda8-11ce-4c48-8f53-d3331ad94530","Type":"ContainerStarted","Data":"99e621161cab4ba6f6f84d162bff90429cb4d3e3d98b94644eea00ddc21867ad"} Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.519314 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-thckw" event={"ID":"fb1a54f1-5bc0-49b1-b200-6088358ce2e8","Type":"ContainerStarted","Data":"288671eb28fb418a57f8a66a1b96baff5e84a2b5db2cf9c1817c79d9d1324ef1"} Jan 27 21:54:29 crc kubenswrapper[4803]: W0127 21:54:29.526404 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97f9fa82_14dc_43a1_b8c6_d3d5fb12daeb.slice/crio-168a6d6843d8aa822ec6fb322904b7815469443d0a3c48016cbea7b4aaef3cfd WatchSource:0}: Error finding container 168a6d6843d8aa822ec6fb322904b7815469443d0a3c48016cbea7b4aaef3cfd: Status 404 returned error can't find the container with id 
168a6d6843d8aa822ec6fb322904b7815469443d0a3c48016cbea7b4aaef3cfd Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.543942 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 27 21:54:29 crc kubenswrapper[4803]: I0127 21:54:29.939493 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.206339 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-7fd45b674-f8ngk"] Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.210211 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.219771 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.220073 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.220109 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-ellmnqqoqbkn" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.222058 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.223348 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-pbdq2" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.238709 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.238950 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.246008 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7fd45b674-f8ngk"] Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.364827 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-tls\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.364887 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rngn5\" (UniqueName: \"kubernetes.io/projected/f118d287-ae55-421d-9b9a-050b79b6692b-kube-api-access-rngn5\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.364912 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f118d287-ae55-421d-9b9a-050b79b6692b-metrics-client-ca\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " 
pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.365305 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.365333 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.365370 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.365386 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-grpc-tls\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.365428 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.467301 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.467380 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-tls\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.467418 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rngn5\" (UniqueName: \"kubernetes.io/projected/f118d287-ae55-421d-9b9a-050b79b6692b-kube-api-access-rngn5\") pod 
\"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.467437 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f118d287-ae55-421d-9b9a-050b79b6692b-metrics-client-ca\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.467476 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.467500 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.467532 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.467552 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-grpc-tls\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.469985 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f118d287-ae55-421d-9b9a-050b79b6692b-metrics-client-ca\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.473401 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.473485 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: 
\"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.473664 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.474185 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.474528 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-thanos-querier-tls\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.474901 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f118d287-ae55-421d-9b9a-050b79b6692b-secret-grpc-tls\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.487359 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rngn5\" (UniqueName: \"kubernetes.io/projected/f118d287-ae55-421d-9b9a-050b79b6692b-kube-api-access-rngn5\") pod \"thanos-querier-7fd45b674-f8ngk\" (UID: \"f118d287-ae55-421d-9b9a-050b79b6692b\") " pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.529173 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j" event={"ID":"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb","Type":"ContainerStarted","Data":"b3a070a7c08af8c869da1fea66272799bff33a574937f9517377913552369687"} Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.529213 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j" event={"ID":"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb","Type":"ContainerStarted","Data":"d1bd2899655b23ae86d2836d331090501742f64d538e0fc0229da73552755807"} Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.529223 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j" event={"ID":"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb","Type":"ContainerStarted","Data":"168a6d6843d8aa822ec6fb322904b7815469443d0a3c48016cbea7b4aaef3cfd"} Jan 27 21:54:30 crc kubenswrapper[4803]: I0127 21:54:30.532638 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:30 crc kubenswrapper[4803]: W0127 21:54:30.598699 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd866823_92fe_4aef_abe7_d7ecc8da30f7.slice/crio-c6420c1cddba9cef68d9bfe2af23f01e26418c0876149a871a9ab05f5e172948 WatchSource:0}: Error finding container c6420c1cddba9cef68d9bfe2af23f01e26418c0876149a871a9ab05f5e172948: Status 404 returned error can't find the container with id c6420c1cddba9cef68d9bfe2af23f01e26418c0876149a871a9ab05f5e172948 Jan 27 21:54:31 crc kubenswrapper[4803]: I0127 21:54:31.027399 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7fd45b674-f8ngk"] Jan 27 21:54:31 crc kubenswrapper[4803]: W0127 21:54:31.038429 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf118d287_ae55_421d_9b9a_050b79b6692b.slice/crio-a06525e20aa991126aed658c0f2bc2e37c81697948ef83a15c13504682e8b18e WatchSource:0}: Error finding container a06525e20aa991126aed658c0f2bc2e37c81697948ef83a15c13504682e8b18e: Status 404 returned error can't find the container with id a06525e20aa991126aed658c0f2bc2e37c81697948ef83a15c13504682e8b18e Jan 27 21:54:31 crc kubenswrapper[4803]: I0127 21:54:31.539175 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9" event={"ID":"010ebda8-11ce-4c48-8f53-d3331ad94530","Type":"ContainerStarted","Data":"be1343760ad3449464621563c454e4fa61458b6c18485634b6be67ce2f917a15"} Jan 27 21:54:31 crc kubenswrapper[4803]: I0127 21:54:31.539220 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9" event={"ID":"010ebda8-11ce-4c48-8f53-d3331ad94530","Type":"ContainerStarted","Data":"d5b0176cbd67c264e61a4ae1e85bb5867461eaa3011c6ad34bbcdd39c33871ff"} Jan 27 21:54:31 crc kubenswrapper[4803]: I0127 21:54:31.539233 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9" event={"ID":"010ebda8-11ce-4c48-8f53-d3331ad94530","Type":"ContainerStarted","Data":"afeded2fcbce5f82935b4cb171e9bafbad51507bdbbe121e1587fb337dc95908"} Jan 27 21:54:31 crc kubenswrapper[4803]: I0127 21:54:31.540278 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" event={"ID":"f118d287-ae55-421d-9b9a-050b79b6692b","Type":"ContainerStarted","Data":"a06525e20aa991126aed658c0f2bc2e37c81697948ef83a15c13504682e8b18e"} Jan 27 21:54:31 crc kubenswrapper[4803]: I0127 21:54:31.542064 4803 generic.go:334] "Generic (PLEG): container finished" podID="fb1a54f1-5bc0-49b1-b200-6088358ce2e8" containerID="f10fbb0f75553523b0640bb8ba7041b91ca69b6294b07034b44218dd0e5fe70f" exitCode=0 Jan 27 21:54:31 crc kubenswrapper[4803]: I0127 21:54:31.542165 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-thckw" event={"ID":"fb1a54f1-5bc0-49b1-b200-6088358ce2e8","Type":"ContainerDied","Data":"f10fbb0f75553523b0640bb8ba7041b91ca69b6294b07034b44218dd0e5fe70f"} Jan 27 21:54:31 crc kubenswrapper[4803]: I0127 21:54:31.543961 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd866823-92fe-4aef-abe7-d7ecc8da30f7","Type":"ContainerStarted","Data":"c6420c1cddba9cef68d9bfe2af23f01e26418c0876149a871a9ab05f5e172948"} 
Jan 27 21:54:31 crc kubenswrapper[4803]: I0127 21:54:31.581310 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4c9h9" podStartSLOduration=1.891052779 podStartE2EDuration="3.581289335s" podCreationTimestamp="2026-01-27 21:54:28 +0000 UTC" firstStartedPulling="2026-01-27 21:54:28.94922094 +0000 UTC m=+421.365242649" lastFinishedPulling="2026-01-27 21:54:30.639457506 +0000 UTC m=+423.055479205" observedRunningTime="2026-01-27 21:54:31.563631967 +0000 UTC m=+423.979653696" watchObservedRunningTime="2026-01-27 21:54:31.581289335 +0000 UTC m=+423.997311034" Jan 27 21:54:32 crc kubenswrapper[4803]: I0127 21:54:32.550893 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-thckw" event={"ID":"fb1a54f1-5bc0-49b1-b200-6088358ce2e8","Type":"ContainerStarted","Data":"cf963dd01cf94055954d4afe7bc99e2a4d65b877048a95ac30fd3f514f7ee401"} Jan 27 21:54:32 crc kubenswrapper[4803]: I0127 21:54:32.551242 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-thckw" event={"ID":"fb1a54f1-5bc0-49b1-b200-6088358ce2e8","Type":"ContainerStarted","Data":"f4718f52bf0d9116e02f1e2fd67995b5a6dc78e38c97142c221e1bec757233a5"} Jan 27 21:54:32 crc kubenswrapper[4803]: I0127 21:54:32.554524 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j" event={"ID":"97f9fa82-14dc-43a1-b8c6-d3d5fb12daeb","Type":"ContainerStarted","Data":"4d76dd46de83385db22daafadf866f388de9d685362d1eb4ffc7a8d8867b6300"} Jan 27 21:54:32 crc kubenswrapper[4803]: I0127 21:54:32.574779 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-thckw" podStartSLOduration=3.042195138 podStartE2EDuration="4.57475872s" podCreationTimestamp="2026-01-27 21:54:28 +0000 UTC" firstStartedPulling="2026-01-27 21:54:29.109525595 +0000 UTC m=+421.525547294" lastFinishedPulling="2026-01-27 21:54:30.642089177 +0000 UTC m=+423.058110876" observedRunningTime="2026-01-27 21:54:32.569724143 +0000 UTC m=+424.985745842" watchObservedRunningTime="2026-01-27 21:54:32.57475872 +0000 UTC m=+424.990780419" Jan 27 21:54:32 crc kubenswrapper[4803]: I0127 21:54:32.590076 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-shq9j" podStartSLOduration=2.846968919 podStartE2EDuration="4.590054614s" podCreationTimestamp="2026-01-27 21:54:28 +0000 UTC" firstStartedPulling="2026-01-27 21:54:29.798723472 +0000 UTC m=+422.214745171" lastFinishedPulling="2026-01-27 21:54:31.541809167 +0000 UTC m=+423.957830866" observedRunningTime="2026-01-27 21:54:32.587535866 +0000 UTC m=+425.003557565" watchObservedRunningTime="2026-01-27 21:54:32.590054614 +0000 UTC m=+425.006076313" Jan 27 21:54:32 crc kubenswrapper[4803]: I0127 21:54:32.936879 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7cbf967b4c-wq5fg"] Jan 27 21:54:32 crc kubenswrapper[4803]: I0127 21:54:32.937511 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:32 crc kubenswrapper[4803]: I0127 21:54:32.953655 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cbf967b4c-wq5fg"] Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.013445 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-config\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.013502 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th77j\" (UniqueName: \"kubernetes.io/projected/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-kube-api-access-th77j\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.013535 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-oauth-config\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.013570 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-serving-cert\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.013588 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-trusted-ca-bundle\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.013609 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-oauth-serving-cert\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.013623 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-service-ca\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.114279 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-serving-cert\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc 
kubenswrapper[4803]: I0127 21:54:33.114316 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-trusted-ca-bundle\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.114342 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-service-ca\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.114358 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-oauth-serving-cert\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.114378 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-config\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.114413 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th77j\" (UniqueName: \"kubernetes.io/projected/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-kube-api-access-th77j\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.114446 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-oauth-config\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.115291 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-oauth-serving-cert\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.115438 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-service-ca\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.115442 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-config\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.115681 4803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-trusted-ca-bundle\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.117931 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-serving-cert\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.122623 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-oauth-config\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.130464 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th77j\" (UniqueName: \"kubernetes.io/projected/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-kube-api-access-th77j\") pod \"console-7cbf967b4c-wq5fg\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.251389 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.460026 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-5dc8cc774c-42hcg"] Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.461232 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.468365 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.468450 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.468589 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.469011 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-edeinm9ck83em" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.469699 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.469758 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hbbmk" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.472925 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5dc8cc774c-42hcg"] Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.549450 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk2xw\" (UniqueName: \"kubernetes.io/projected/f978ff10-12ad-4883-98d9-7ce831fad147-kube-api-access-qk2xw\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.549499 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f978ff10-12ad-4883-98d9-7ce831fad147-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.549542 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f978ff10-12ad-4883-98d9-7ce831fad147-client-ca-bundle\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.549594 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f978ff10-12ad-4883-98d9-7ce831fad147-secret-metrics-client-certs\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.549627 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f978ff10-12ad-4883-98d9-7ce831fad147-audit-log\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 
27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.549838 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f978ff10-12ad-4883-98d9-7ce831fad147-metrics-server-audit-profiles\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.549946 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f978ff10-12ad-4883-98d9-7ce831fad147-secret-metrics-server-tls\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.560698 4803 generic.go:334] "Generic (PLEG): container finished" podID="bd866823-92fe-4aef-abe7-d7ecc8da30f7" containerID="3a7453b482f6fe2752ca2eacaca631ff21881054ebcbba37e47e107f05d4f975" exitCode=0
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.560765 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd866823-92fe-4aef-abe7-d7ecc8da30f7","Type":"ContainerDied","Data":"3a7453b482f6fe2752ca2eacaca631ff21881054ebcbba37e47e107f05d4f975"}
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.563934 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" event={"ID":"f118d287-ae55-421d-9b9a-050b79b6692b","Type":"ContainerStarted","Data":"61644636e5af6021570309fea57e54527f7f82380d20af83334e382b20ca76e5"}
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.564319 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" event={"ID":"f118d287-ae55-421d-9b9a-050b79b6692b","Type":"ContainerStarted","Data":"59775cb5dc5600d24bc7f6efc9432a98310d4583f1df24101a0c7fcfe76133e0"}
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.564336 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" event={"ID":"f118d287-ae55-421d-9b9a-050b79b6692b","Type":"ContainerStarted","Data":"43c5576fa719fe60f78d63e377b1d97b73aaf0d7f8295b2a31dd891f5791f03a"}
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.624492 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cbf967b4c-wq5fg"]
Jan 27 21:54:33 crc kubenswrapper[4803]: W0127 21:54:33.628194 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd997814_9e1e_40b8_9ae5_725aa96ce1ce.slice/crio-b16f8f01560708f4d98521c5332ac8e3e341b04e3b4582e06b98999ec360a3ab WatchSource:0}: Error finding container b16f8f01560708f4d98521c5332ac8e3e341b04e3b4582e06b98999ec360a3ab: Status 404 returned error can't find the container with id b16f8f01560708f4d98521c5332ac8e3e341b04e3b4582e06b98999ec360a3ab
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.651590 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk2xw\" (UniqueName: \"kubernetes.io/projected/f978ff10-12ad-4883-98d9-7ce831fad147-kube-api-access-qk2xw\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.651642 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f978ff10-12ad-4883-98d9-7ce831fad147-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.651676 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f978ff10-12ad-4883-98d9-7ce831fad147-client-ca-bundle\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.651722 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f978ff10-12ad-4883-98d9-7ce831fad147-secret-metrics-client-certs\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.651760 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f978ff10-12ad-4883-98d9-7ce831fad147-audit-log\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.651873 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f978ff10-12ad-4883-98d9-7ce831fad147-metrics-server-audit-profiles\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.651916 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f978ff10-12ad-4883-98d9-7ce831fad147-secret-metrics-server-tls\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.652691 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f978ff10-12ad-4883-98d9-7ce831fad147-audit-log\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.652823 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f978ff10-12ad-4883-98d9-7ce831fad147-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.653331 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f978ff10-12ad-4883-98d9-7ce831fad147-metrics-server-audit-profiles\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.656458 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f978ff10-12ad-4883-98d9-7ce831fad147-secret-metrics-client-certs\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.657722 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f978ff10-12ad-4883-98d9-7ce831fad147-secret-metrics-server-tls\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.657810 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f978ff10-12ad-4883-98d9-7ce831fad147-client-ca-bundle\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.669576 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk2xw\" (UniqueName: \"kubernetes.io/projected/f978ff10-12ad-4883-98d9-7ce831fad147-kube-api-access-qk2xw\") pod \"metrics-server-5dc8cc774c-42hcg\" (UID: \"f978ff10-12ad-4883-98d9-7ce831fad147\") " pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.783547 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg"
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.919323 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5"]
Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.920200 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5"
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.921800 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.922098 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 27 21:54:33 crc kubenswrapper[4803]: I0127 21:54:33.932991 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5"] Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.056098 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/354a68b0-46f4-4cae-afbe-c5ef5fba4bdf-monitoring-plugin-cert\") pod \"monitoring-plugin-8d685d9cc-c64j5\" (UID: \"354a68b0-46f4-4cae-afbe-c5ef5fba4bdf\") " pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.157688 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/354a68b0-46f4-4cae-afbe-c5ef5fba4bdf-monitoring-plugin-cert\") pod \"monitoring-plugin-8d685d9cc-c64j5\" (UID: \"354a68b0-46f4-4cae-afbe-c5ef5fba4bdf\") " pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.180179 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/354a68b0-46f4-4cae-afbe-c5ef5fba4bdf-monitoring-plugin-cert\") pod \"monitoring-plugin-8d685d9cc-c64j5\" (UID: \"354a68b0-46f4-4cae-afbe-c5ef5fba4bdf\") " pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.203457 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5dc8cc774c-42hcg"] Jan 27 21:54:34 crc kubenswrapper[4803]: W0127 21:54:34.212716 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf978ff10_12ad_4883_98d9_7ce831fad147.slice/crio-3034132abd6b1bb6cfc4c451cc94a8f8e562e389512ba9ef31f47b89b913d8b7 WatchSource:0}: Error finding container 3034132abd6b1bb6cfc4c451cc94a8f8e562e389512ba9ef31f47b89b913d8b7: Status 404 returned error can't find the container with id 3034132abd6b1bb6cfc4c451cc94a8f8e562e389512ba9ef31f47b89b913d8b7 Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.240857 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.434065 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.436559 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.454244 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-42g4k" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.454498 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.454660 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.454785 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.455131 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-dh2jp2437skr0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.455307 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.455626 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.461004 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.461233 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.461376 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.463134 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.466528 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.466924 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.469609 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563293 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563332 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: 
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563425 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563452 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-web-config\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563479 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563502 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkmbk\" (UniqueName: \"kubernetes.io/projected/7e1a6ace-a129-49c9-a417-8e3cff536f8f-kube-api-access-lkmbk\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563517 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563536 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563554 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563578 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-config\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563599 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7e1a6ace-a129-49c9-a417-8e3cff536f8f-config-out\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563619 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563635 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563651 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563665 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563683 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7e1a6ace-a129-49c9-a417-8e3cff536f8f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.563696 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.585326 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" event={"ID":"f978ff10-12ad-4883-98d9-7ce831fad147","Type":"ContainerStarted","Data":"3034132abd6b1bb6cfc4c451cc94a8f8e562e389512ba9ef31f47b89b913d8b7"}
Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.588504 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cbf967b4c-wq5fg" event={"ID":"dd997814-9e1e-40b8-9ae5-725aa96ce1ce","Type":"ContainerStarted","Data":"367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868"}
event={"ID":"dd997814-9e1e-40b8-9ae5-725aa96ce1ce","Type":"ContainerStarted","Data":"367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868"} Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.588532 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cbf967b4c-wq5fg" event={"ID":"dd997814-9e1e-40b8-9ae5-725aa96ce1ce","Type":"ContainerStarted","Data":"b16f8f01560708f4d98521c5332ac8e3e341b04e3b4582e06b98999ec360a3ab"} Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.614695 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7cbf967b4c-wq5fg" podStartSLOduration=2.614675122 podStartE2EDuration="2.614675122s" podCreationTimestamp="2026-01-27 21:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:54:34.606400178 +0000 UTC m=+427.022421877" watchObservedRunningTime="2026-01-27 21:54:34.614675122 +0000 UTC m=+427.030696841" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.648619 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5"] Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.666804 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-config\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.666875 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7e1a6ace-a129-49c9-a417-8e3cff536f8f-config-out\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.666911 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.666927 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.666946 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.666961 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc 
kubenswrapper[4803]: I0127 21:54:34.667109 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7e1a6ace-a129-49c9-a417-8e3cff536f8f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.667611 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.667687 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.667704 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.667726 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7e1a6ace-a129-49c9-a417-8e3cff536f8f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.667764 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.667977 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-web-config\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.668053 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.668118 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkmbk\" (UniqueName: \"kubernetes.io/projected/7e1a6ace-a129-49c9-a417-8e3cff536f8f-kube-api-access-lkmbk\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 
21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.668158 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.668196 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.668244 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.668253 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.670087 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7e1a6ace-a129-49c9-a417-8e3cff536f8f-config-out\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.670729 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.671099 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.671395 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.671719 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7e1a6ace-a129-49c9-a417-8e3cff536f8f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc 
kubenswrapper[4803]: I0127 21:54:34.671894 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7e1a6ace-a129-49c9-a417-8e3cff536f8f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.673436 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.673495 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-web-config\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.673950 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.674104 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.674913 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7e1a6ace-a129-49c9-a417-8e3cff536f8f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.675493 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.675825 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-config\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.676317 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.676941 4803 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.678568 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7e1a6ace-a129-49c9-a417-8e3cff536f8f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.691337 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkmbk\" (UniqueName: \"kubernetes.io/projected/7e1a6ace-a129-49c9-a417-8e3cff536f8f-kube-api-access-lkmbk\") pod \"prometheus-k8s-0\" (UID: \"7e1a6ace-a129-49c9-a417-8e3cff536f8f\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:34 crc kubenswrapper[4803]: I0127 21:54:34.764711 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:35 crc kubenswrapper[4803]: I0127 21:54:35.197677 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 21:54:35 crc kubenswrapper[4803]: W0127 21:54:35.478109 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e1a6ace_a129_49c9_a417_8e3cff536f8f.slice/crio-d06497131758a727021bfd021470e09110d86e40fb9d20a3b8a92f03ec176333 WatchSource:0}: Error finding container d06497131758a727021bfd021470e09110d86e40fb9d20a3b8a92f03ec176333: Status 404 returned error can't find the container with id d06497131758a727021bfd021470e09110d86e40fb9d20a3b8a92f03ec176333 Jan 27 21:54:35 crc kubenswrapper[4803]: I0127 21:54:35.594590 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" event={"ID":"354a68b0-46f4-4cae-afbe-c5ef5fba4bdf","Type":"ContainerStarted","Data":"4b7b58dae779b616b33ec40e914fad6275d7f74070fe33a2c6f2c3a43c5f2ac0"} Jan 27 21:54:35 crc kubenswrapper[4803]: I0127 21:54:35.595822 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7e1a6ace-a129-49c9-a417-8e3cff536f8f","Type":"ContainerStarted","Data":"d06497131758a727021bfd021470e09110d86e40fb9d20a3b8a92f03ec176333"} Jan 27 21:54:35 crc kubenswrapper[4803]: I0127 21:54:35.598341 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" event={"ID":"f118d287-ae55-421d-9b9a-050b79b6692b","Type":"ContainerStarted","Data":"590eedc696e04992c51e2bfbb8a5bec90ea8c56c635c596a7e9bf5d2de6d8d97"} Jan 27 21:54:35 crc kubenswrapper[4803]: I0127 21:54:35.598383 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" event={"ID":"f118d287-ae55-421d-9b9a-050b79b6692b","Type":"ContainerStarted","Data":"543d536ecdf97c31544b8b23ab3202989d2e69148158a89093e9f403224b7238"} Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.606778 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" 
event={"ID":"f978ff10-12ad-4883-98d9-7ce831fad147","Type":"ContainerStarted","Data":"01d358f5c285efb0d85a58dc84fe3ddf3c305b211f25861b4e7f911bf4fbca0f"} Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.609001 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" event={"ID":"354a68b0-46f4-4cae-afbe-c5ef5fba4bdf","Type":"ContainerStarted","Data":"2484c0495f14ca00acab2fbc636d3d1bf01e8b9a2477445175ef6513a3c1804d"} Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.609295 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.612697 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd866823-92fe-4aef-abe7-d7ecc8da30f7","Type":"ContainerStarted","Data":"d0151d194895d29a24ac542ca32fb3a5a873edb0cc7f8d2efe352fcaa0b997e4"} Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.612742 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd866823-92fe-4aef-abe7-d7ecc8da30f7","Type":"ContainerStarted","Data":"225dacf9a159afeb37881a544d213a57903d0ad957e3d868f7af0c3c5f54416d"} Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.612763 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd866823-92fe-4aef-abe7-d7ecc8da30f7","Type":"ContainerStarted","Data":"fcc03801ef235482d2b3b0f70f413ecd3f4ce84c35c695d99f43d7dec2860614"} Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.615065 4803 generic.go:334] "Generic (PLEG): container finished" podID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerID="d92ca97924e3b1213a3107656e0be489ec9dc52c6cc26538a5ac01b8756dd9a4" exitCode=0 Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.615177 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7e1a6ace-a129-49c9-a417-8e3cff536f8f","Type":"ContainerDied","Data":"d92ca97924e3b1213a3107656e0be489ec9dc52c6cc26538a5ac01b8756dd9a4"} Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.616101 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.619568 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" event={"ID":"f118d287-ae55-421d-9b9a-050b79b6692b","Type":"ContainerStarted","Data":"9e868d0c552a058919299faa38a751f42ad76c67a8c0b576a2fb23b74d9bca83"} Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.620081 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.626534 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" podStartSLOduration=1.952767284 podStartE2EDuration="3.626481824s" podCreationTimestamp="2026-01-27 21:54:33 +0000 UTC" firstStartedPulling="2026-01-27 21:54:34.21630916 +0000 UTC m=+426.632330859" lastFinishedPulling="2026-01-27 21:54:35.8900237 +0000 UTC m=+428.306045399" observedRunningTime="2026-01-27 21:54:36.622903448 +0000 UTC m=+429.038925187" watchObservedRunningTime="2026-01-27 21:54:36.626481824 +0000 UTC m=+429.042503563" Jan 27 
Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.677976 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" podStartSLOduration=2.9544859089999997 podStartE2EDuration="6.677958477s" podCreationTimestamp="2026-01-27 21:54:30 +0000 UTC" firstStartedPulling="2026-01-27 21:54:31.041774616 +0000 UTC m=+423.457796315" lastFinishedPulling="2026-01-27 21:54:34.765247174 +0000 UTC m=+427.181268883" observedRunningTime="2026-01-27 21:54:36.677616948 +0000 UTC m=+429.093638677" watchObservedRunningTime="2026-01-27 21:54:36.677958477 +0000 UTC m=+429.093980176"
Jan 27 21:54:36 crc kubenswrapper[4803]: I0127 21:54:36.704050 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" podStartSLOduration=2.083524961 podStartE2EDuration="3.704026871s" podCreationTimestamp="2026-01-27 21:54:33 +0000 UTC" firstStartedPulling="2026-01-27 21:54:34.725909 +0000 UTC m=+427.141930699" lastFinishedPulling="2026-01-27 21:54:36.34641091 +0000 UTC m=+428.762432609" observedRunningTime="2026-01-27 21:54:36.693490086 +0000 UTC m=+429.109511785" watchObservedRunningTime="2026-01-27 21:54:36.704026871 +0000 UTC m=+429.120048610"
Jan 27 21:54:37 crc kubenswrapper[4803]: I0127 21:54:37.629995 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd866823-92fe-4aef-abe7-d7ecc8da30f7","Type":"ContainerStarted","Data":"73b50bd50e602dd8c8c65237a792fa4c492649c2ae5fb6d6bf0103f0175a4e6c"}
Jan 27 21:54:37 crc kubenswrapper[4803]: I0127 21:54:37.630344 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd866823-92fe-4aef-abe7-d7ecc8da30f7","Type":"ContainerStarted","Data":"5252176d3aea4f4322bb25c849570d73d3a5521f4402c601606dc71849b3e8e2"}
Jan 27 21:54:37 crc kubenswrapper[4803]: I0127 21:54:37.630361 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd866823-92fe-4aef-abe7-d7ecc8da30f7","Type":"ContainerStarted","Data":"53059194ea4b42d0bef0bb60bed549798e7a5c78085dda67ddcc0e4765bdfd1e"}
Jan 27 21:54:37 crc kubenswrapper[4803]: I0127 21:54:37.663838 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.4005403579999998 podStartE2EDuration="8.663814565s" podCreationTimestamp="2026-01-27 21:54:29 +0000 UTC" firstStartedPulling="2026-01-27 21:54:30.627306448 +0000 UTC m=+423.043328147" lastFinishedPulling="2026-01-27 21:54:35.890580655 +0000 UTC m=+428.306602354" observedRunningTime="2026-01-27 21:54:37.659746955 +0000 UTC m=+430.075768664" watchObservedRunningTime="2026-01-27 21:54:37.663814565 +0000 UTC m=+430.079836264"
Jan 27 21:54:39 crc kubenswrapper[4803]: I0127 21:54:39.644511 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7e1a6ace-a129-49c9-a417-8e3cff536f8f","Type":"ContainerStarted","Data":"b38a7e1bde06d99eb8a70c9e615c871d61b42fb709378ee424f8e73868221c9c"}
Jan 27 21:54:40 crc kubenswrapper[4803]: I0127 21:54:40.541068 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk"
Jan 27 21:54:40 crc kubenswrapper[4803]: I0127 21:54:40.655522 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7e1a6ace-a129-49c9-a417-8e3cff536f8f","Type":"ContainerStarted","Data":"7107d96bbb3f914be6b71d22f89c1734971bfdf59bdb8fb6c7b79e4fe0018030"}
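
Editor's note: the "Observed pod startup duration" records above encode a simple relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. startup latency with pull time excluded. The metrics-server numbers check out: 3.626481824s - (21:54:35.8900237 - 21:54:34.21630916) = 1.952767284s. A worked verification:

```python
from datetime import datetime

# Values copied from the metrics-server-5dc8cc774c-42hcg record above;
# fractional seconds are truncated to microseconds for strptime.
fmt = "%Y-%m-%d %H:%M:%S.%f"
first_pull = datetime.strptime("2026-01-27 21:54:34.216309", fmt)
last_pull  = datetime.strptime("2026-01-27 21:54:35.890023", fmt)
e2e = 3.626481824                      # podStartE2EDuration, in seconds

pull = (last_pull - first_pull).total_seconds()
print(f"pull window:  {pull:.6f}s")
print(f"SLO duration: {e2e - pull:.6f}s  (log says 1.952767284s)")
```

The console pod's record earlier shows the degenerate case: its pull timestamps are the zero time (0001-01-01), meaning no image pull was needed, so SLO and E2E durations are identical.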
event={"ID":"7e1a6ace-a129-49c9-a417-8e3cff536f8f","Type":"ContainerStarted","Data":"7107d96bbb3f914be6b71d22f89c1734971bfdf59bdb8fb6c7b79e4fe0018030"} Jan 27 21:54:40 crc kubenswrapper[4803]: I0127 21:54:40.655589 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7e1a6ace-a129-49c9-a417-8e3cff536f8f","Type":"ContainerStarted","Data":"1c6b37b56a4c7dd45a927bd3fb90cf38baeec44ffcaad5ac79497c7d3f352a81"} Jan 27 21:54:40 crc kubenswrapper[4803]: I0127 21:54:40.655606 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7e1a6ace-a129-49c9-a417-8e3cff536f8f","Type":"ContainerStarted","Data":"f2e1fa3b2ff18e50bfab7f04838ab81e238ea33fd4feafa815cc89c8efc74410"} Jan 27 21:54:40 crc kubenswrapper[4803]: I0127 21:54:40.655624 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7e1a6ace-a129-49c9-a417-8e3cff536f8f","Type":"ContainerStarted","Data":"8b734801de2dc7e4ec7b0d6f9b91333fc1387a91bf5edfa85d2e6dfe8fd491d0"} Jan 27 21:54:40 crc kubenswrapper[4803]: I0127 21:54:40.655638 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7e1a6ace-a129-49c9-a417-8e3cff536f8f","Type":"ContainerStarted","Data":"0a40e0999c883219e304f345ccf62e95ebd464073699880653bb218418b050a0"} Jan 27 21:54:40 crc kubenswrapper[4803]: I0127 21:54:40.690642 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.836411093 podStartE2EDuration="6.690626445s" podCreationTimestamp="2026-01-27 21:54:34 +0000 UTC" firstStartedPulling="2026-01-27 21:54:36.616835684 +0000 UTC m=+429.032857393" lastFinishedPulling="2026-01-27 21:54:39.471051046 +0000 UTC m=+431.887072745" observedRunningTime="2026-01-27 21:54:40.681290042 +0000 UTC m=+433.097311771" watchObservedRunningTime="2026-01-27 21:54:40.690626445 +0000 UTC m=+433.106648144" Jan 27 21:54:43 crc kubenswrapper[4803]: I0127 21:54:43.252116 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:43 crc kubenswrapper[4803]: I0127 21:54:43.252540 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:43 crc kubenswrapper[4803]: I0127 21:54:43.261926 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:43 crc kubenswrapper[4803]: I0127 21:54:43.684716 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:54:43 crc kubenswrapper[4803]: I0127 21:54:43.742556 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-s9tzw"] Jan 27 21:54:44 crc kubenswrapper[4803]: I0127 21:54:44.765372 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:54:53 crc kubenswrapper[4803]: I0127 21:54:53.784419 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 21:54:53 crc kubenswrapper[4803]: I0127 21:54:53.785182 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 21:55:08 crc kubenswrapper[4803]: I0127 
Jan 27 21:55:08 crc kubenswrapper[4803]: I0127 21:55:08.793177 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-s9tzw" podUID="b06a9990-b5a6-4198-b3da-22eb6df6692b" containerName="console" containerID="cri-o://d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134" gracePeriod=15
Jan 27 21:55:08 crc kubenswrapper[4803]: I0127 21:55:08.912048 4803 patch_prober.go:28] interesting pod/console-f9d7485db-s9tzw container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 27 21:55:08 crc kubenswrapper[4803]: I0127 21:55:08.912508 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-s9tzw" podUID="b06a9990-b5a6-4198-b3da-22eb6df6692b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.771148 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-s9tzw_b06a9990-b5a6-4198-b3da-22eb6df6692b/console/0.log"
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.771475 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-s9tzw"
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.870273 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-s9tzw_b06a9990-b5a6-4198-b3da-22eb6df6692b/console/0.log"
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.870327 4803 generic.go:334] "Generic (PLEG): container finished" podID="b06a9990-b5a6-4198-b3da-22eb6df6692b" containerID="d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134" exitCode=2
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.870358 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-s9tzw" event={"ID":"b06a9990-b5a6-4198-b3da-22eb6df6692b","Type":"ContainerDied","Data":"d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134"}
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.870382 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-s9tzw" event={"ID":"b06a9990-b5a6-4198-b3da-22eb6df6692b","Type":"ContainerDied","Data":"8d87e246a5546a1e6a10cf8d381766667ac0e1a83454f068bce1bb32f09dbf1e"}
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.870400 4803 scope.go:117] "RemoveContainer" containerID="d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134"
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.870463 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-s9tzw"
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.889771 4803 scope.go:117] "RemoveContainer" containerID="d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134"
Jan 27 21:55:09 crc kubenswrapper[4803]: E0127 21:55:09.890180 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134\": container with ID starting with d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134 not found: ID does not exist" containerID="d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134"
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.890209 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134"} err="failed to get container status \"d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134\": rpc error: code = NotFound desc = could not find container \"d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134\": container with ID starting with d61d8d3f5ef8753e102cf50a3828630edf3761bda6c9375eb430177286a3c134 not found: ID does not exist"
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.955764 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg8wz\" (UniqueName: \"kubernetes.io/projected/b06a9990-b5a6-4198-b3da-22eb6df6692b-kube-api-access-wg8wz\") pod \"b06a9990-b5a6-4198-b3da-22eb6df6692b\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") "
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.955917 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-service-ca\") pod \"b06a9990-b5a6-4198-b3da-22eb6df6692b\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") "
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.955954 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-oauth-config\") pod \"b06a9990-b5a6-4198-b3da-22eb6df6692b\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") "
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.956036 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-oauth-serving-cert\") pod \"b06a9990-b5a6-4198-b3da-22eb6df6692b\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") "
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.956137 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-trusted-ca-bundle\") pod \"b06a9990-b5a6-4198-b3da-22eb6df6692b\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") "
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.956208 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-serving-cert\") pod \"b06a9990-b5a6-4198-b3da-22eb6df6692b\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") "
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.956363 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-config\") pod \"b06a9990-b5a6-4198-b3da-22eb6df6692b\" (UID: \"b06a9990-b5a6-4198-b3da-22eb6df6692b\") "
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.957079 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-service-ca" (OuterVolumeSpecName: "service-ca") pod "b06a9990-b5a6-4198-b3da-22eb6df6692b" (UID: "b06a9990-b5a6-4198-b3da-22eb6df6692b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.957105 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-config" (OuterVolumeSpecName: "console-config") pod "b06a9990-b5a6-4198-b3da-22eb6df6692b" (UID: "b06a9990-b5a6-4198-b3da-22eb6df6692b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.957157 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b06a9990-b5a6-4198-b3da-22eb6df6692b" (UID: "b06a9990-b5a6-4198-b3da-22eb6df6692b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.957196 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b06a9990-b5a6-4198-b3da-22eb6df6692b" (UID: "b06a9990-b5a6-4198-b3da-22eb6df6692b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.957457 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.957486 4803 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-config\") on node \"crc\" DevicePath \"\""
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.957501 4803 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.957512 4803 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b06a9990-b5a6-4198-b3da-22eb6df6692b-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.962903 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b06a9990-b5a6-4198-b3da-22eb6df6692b" (UID: "b06a9990-b5a6-4198-b3da-22eb6df6692b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.963659 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b06a9990-b5a6-4198-b3da-22eb6df6692b" (UID: "b06a9990-b5a6-4198-b3da-22eb6df6692b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:55:09 crc kubenswrapper[4803]: I0127 21:55:09.968248 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06a9990-b5a6-4198-b3da-22eb6df6692b-kube-api-access-wg8wz" (OuterVolumeSpecName: "kube-api-access-wg8wz") pod "b06a9990-b5a6-4198-b3da-22eb6df6692b" (UID: "b06a9990-b5a6-4198-b3da-22eb6df6692b"). InnerVolumeSpecName "kube-api-access-wg8wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:55:10 crc kubenswrapper[4803]: I0127 21:55:10.064780 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg8wz\" (UniqueName: \"kubernetes.io/projected/b06a9990-b5a6-4198-b3da-22eb6df6692b-kube-api-access-wg8wz\") on node \"crc\" DevicePath \"\"" Jan 27 21:55:10 crc kubenswrapper[4803]: I0127 21:55:10.064892 4803 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:55:10 crc kubenswrapper[4803]: I0127 21:55:10.064912 4803 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b06a9990-b5a6-4198-b3da-22eb6df6692b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:55:10 crc kubenswrapper[4803]: I0127 21:55:10.200047 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-s9tzw"] Jan 27 21:55:10 crc kubenswrapper[4803]: I0127 21:55:10.202708 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-s9tzw"] Jan 27 21:55:10 crc kubenswrapper[4803]: I0127 21:55:10.314727 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06a9990-b5a6-4198-b3da-22eb6df6692b" path="/var/lib/kubelet/pods/b06a9990-b5a6-4198-b3da-22eb6df6692b/volumes" Jan 27 21:55:13 crc kubenswrapper[4803]: I0127 21:55:13.790452 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 21:55:13 crc kubenswrapper[4803]: I0127 21:55:13.794636 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 21:55:34 crc kubenswrapper[4803]: I0127 21:55:34.766258 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:55:34 crc kubenswrapper[4803]: I0127 21:55:34.816883 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:55:35 crc kubenswrapper[4803]: I0127 21:55:35.078082 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.740773 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8db9b8f74-wdfx9"] Jan 27 21:56:03 crc kubenswrapper[4803]: E0127 21:56:03.741715 
4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06a9990-b5a6-4198-b3da-22eb6df6692b" containerName="console" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.741732 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06a9990-b5a6-4198-b3da-22eb6df6692b" containerName="console" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.741902 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06a9990-b5a6-4198-b3da-22eb6df6692b" containerName="console" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.742597 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.792034 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxcg5\" (UniqueName: \"kubernetes.io/projected/80401bf8-2e71-4abf-83e4-346fd998733d-kube-api-access-fxcg5\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.792107 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-trusted-ca-bundle\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.792138 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-serving-cert\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.792177 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-oauth-serving-cert\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.792213 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-oauth-config\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.792242 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-service-ca\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.792270 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-console-config\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " 
pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.816718 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8db9b8f74-wdfx9"] Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.893881 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxcg5\" (UniqueName: \"kubernetes.io/projected/80401bf8-2e71-4abf-83e4-346fd998733d-kube-api-access-fxcg5\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.893932 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-trusted-ca-bundle\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.893960 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-serving-cert\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.893988 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-oauth-serving-cert\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.894067 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-oauth-config\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.894094 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-service-ca\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.894116 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-console-config\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.895251 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-console-config\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.895267 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-service-ca\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.895297 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-oauth-serving-cert\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.895325 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-trusted-ca-bundle\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.899992 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-oauth-config\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.900073 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-serving-cert\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:03 crc kubenswrapper[4803]: I0127 21:56:03.908973 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxcg5\" (UniqueName: \"kubernetes.io/projected/80401bf8-2e71-4abf-83e4-346fd998733d-kube-api-access-fxcg5\") pod \"console-8db9b8f74-wdfx9\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:04 crc kubenswrapper[4803]: I0127 21:56:04.060604 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:04 crc kubenswrapper[4803]: I0127 21:56:04.284034 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8db9b8f74-wdfx9"] Jan 27 21:56:04 crc kubenswrapper[4803]: W0127 21:56:04.291432 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80401bf8_2e71_4abf_83e4_346fd998733d.slice/crio-4e42357b57431b5ebf41b2482081e3db26b6a9e6bfa558a0c69c12afe8496fb6 WatchSource:0}: Error finding container 4e42357b57431b5ebf41b2482081e3db26b6a9e6bfa558a0c69c12afe8496fb6: Status 404 returned error can't find the container with id 4e42357b57431b5ebf41b2482081e3db26b6a9e6bfa558a0c69c12afe8496fb6 Jan 27 21:56:05 crc kubenswrapper[4803]: I0127 21:56:05.232656 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8db9b8f74-wdfx9" event={"ID":"80401bf8-2e71-4abf-83e4-346fd998733d","Type":"ContainerStarted","Data":"e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4"} Jan 27 21:56:05 crc kubenswrapper[4803]: I0127 21:56:05.233189 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8db9b8f74-wdfx9" event={"ID":"80401bf8-2e71-4abf-83e4-346fd998733d","Type":"ContainerStarted","Data":"4e42357b57431b5ebf41b2482081e3db26b6a9e6bfa558a0c69c12afe8496fb6"} Jan 27 21:56:05 crc kubenswrapper[4803]: I0127 21:56:05.255571 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8db9b8f74-wdfx9" podStartSLOduration=2.25555463 podStartE2EDuration="2.25555463s" podCreationTimestamp="2026-01-27 21:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:56:05.253811052 +0000 UTC m=+517.669832771" watchObservedRunningTime="2026-01-27 21:56:05.25555463 +0000 UTC m=+517.671576329" Jan 27 21:56:14 crc kubenswrapper[4803]: I0127 21:56:14.061068 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:14 crc kubenswrapper[4803]: I0127 21:56:14.062538 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:14 crc kubenswrapper[4803]: I0127 21:56:14.069095 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:14 crc kubenswrapper[4803]: I0127 21:56:14.326032 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 21:56:14 crc kubenswrapper[4803]: I0127 21:56:14.381374 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7cbf967b4c-wq5fg"] Jan 27 21:56:16 crc kubenswrapper[4803]: I0127 21:56:16.343899 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:56:16 crc kubenswrapper[4803]: I0127 21:56:16.344399 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:56:39 crc kubenswrapper[4803]: I0127 21:56:39.419251 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7cbf967b4c-wq5fg" podUID="dd997814-9e1e-40b8-9ae5-725aa96ce1ce" containerName="console" containerID="cri-o://367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868" gracePeriod=15 Jan 27 21:56:39 crc kubenswrapper[4803]: I0127 21:56:39.850974 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7cbf967b4c-wq5fg_dd997814-9e1e-40b8-9ae5-725aa96ce1ce/console/0.log" Jan 27 21:56:39 crc kubenswrapper[4803]: I0127 21:56:39.851352 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.011446 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-oauth-config\") pod \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.011594 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-trusted-ca-bundle\") pod \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.011698 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th77j\" (UniqueName: \"kubernetes.io/projected/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-kube-api-access-th77j\") pod \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.013003 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-service-ca\") pod \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.013175 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-config\") pod \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.013234 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-serving-cert\") pod \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.013255 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dd997814-9e1e-40b8-9ae5-725aa96ce1ce" (UID: "dd997814-9e1e-40b8-9ae5-725aa96ce1ce"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.013296 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-oauth-serving-cert\") pod \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\" (UID: \"dd997814-9e1e-40b8-9ae5-725aa96ce1ce\") " Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.013808 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-service-ca" (OuterVolumeSpecName: "service-ca") pod "dd997814-9e1e-40b8-9ae5-725aa96ce1ce" (UID: "dd997814-9e1e-40b8-9ae5-725aa96ce1ce"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.013885 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.014315 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-config" (OuterVolumeSpecName: "console-config") pod "dd997814-9e1e-40b8-9ae5-725aa96ce1ce" (UID: "dd997814-9e1e-40b8-9ae5-725aa96ce1ce"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.014907 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dd997814-9e1e-40b8-9ae5-725aa96ce1ce" (UID: "dd997814-9e1e-40b8-9ae5-725aa96ce1ce"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.021323 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dd997814-9e1e-40b8-9ae5-725aa96ce1ce" (UID: "dd997814-9e1e-40b8-9ae5-725aa96ce1ce"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.022525 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dd997814-9e1e-40b8-9ae5-725aa96ce1ce" (UID: "dd997814-9e1e-40b8-9ae5-725aa96ce1ce"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.028512 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-kube-api-access-th77j" (OuterVolumeSpecName: "kube-api-access-th77j") pod "dd997814-9e1e-40b8-9ae5-725aa96ce1ce" (UID: "dd997814-9e1e-40b8-9ae5-725aa96ce1ce"). InnerVolumeSpecName "kube-api-access-th77j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.122185 4803 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.122722 4803 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.122792 4803 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.122824 4803 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.122893 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th77j\" (UniqueName: \"kubernetes.io/projected/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-kube-api-access-th77j\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.122924 4803 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd997814-9e1e-40b8-9ae5-725aa96ce1ce-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.511406 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7cbf967b4c-wq5fg_dd997814-9e1e-40b8-9ae5-725aa96ce1ce/console/0.log" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.511457 4803 generic.go:334] "Generic (PLEG): container finished" podID="dd997814-9e1e-40b8-9ae5-725aa96ce1ce" containerID="367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868" exitCode=2 Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.511493 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cbf967b4c-wq5fg" event={"ID":"dd997814-9e1e-40b8-9ae5-725aa96ce1ce","Type":"ContainerDied","Data":"367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868"} Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.511518 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cbf967b4c-wq5fg" event={"ID":"dd997814-9e1e-40b8-9ae5-725aa96ce1ce","Type":"ContainerDied","Data":"b16f8f01560708f4d98521c5332ac8e3e341b04e3b4582e06b98999ec360a3ab"} Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.511533 4803 scope.go:117] "RemoveContainer" containerID="367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.511563 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cbf967b4c-wq5fg" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.541146 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7cbf967b4c-wq5fg"] Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.543904 4803 scope.go:117] "RemoveContainer" containerID="367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868" Jan 27 21:56:40 crc kubenswrapper[4803]: E0127 21:56:40.544528 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868\": container with ID starting with 367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868 not found: ID does not exist" containerID="367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.544567 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868"} err="failed to get container status \"367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868\": rpc error: code = NotFound desc = could not find container \"367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868\": container with ID starting with 367068cfda7b7bd11855dd209c84148b702f7ac19daaa70f22373bb21827d868 not found: ID does not exist" Jan 27 21:56:40 crc kubenswrapper[4803]: I0127 21:56:40.547616 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7cbf967b4c-wq5fg"] Jan 27 21:56:42 crc kubenswrapper[4803]: I0127 21:56:42.316555 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd997814-9e1e-40b8-9ae5-725aa96ce1ce" path="/var/lib/kubelet/pods/dd997814-9e1e-40b8-9ae5-725aa96ce1ce/volumes" Jan 27 21:56:46 crc kubenswrapper[4803]: I0127 21:56:46.344262 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:56:46 crc kubenswrapper[4803]: I0127 21:56:46.344777 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:57:16 crc kubenswrapper[4803]: I0127 21:57:16.343794 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:57:16 crc kubenswrapper[4803]: I0127 21:57:16.344744 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:57:16 crc kubenswrapper[4803]: I0127 21:57:16.344918 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 21:57:16 crc kubenswrapper[4803]: I0127 21:57:16.346766 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8efaf7b446df272e0996a17c38530d9da7be7bbc83602d505bce00b2e3d7c50"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:57:16 crc kubenswrapper[4803]: I0127 21:57:16.346929 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://e8efaf7b446df272e0996a17c38530d9da7be7bbc83602d505bce00b2e3d7c50" gracePeriod=600 Jan 27 21:57:16 crc kubenswrapper[4803]: I0127 21:57:16.755515 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="e8efaf7b446df272e0996a17c38530d9da7be7bbc83602d505bce00b2e3d7c50" exitCode=0 Jan 27 21:57:16 crc kubenswrapper[4803]: I0127 21:57:16.755632 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"e8efaf7b446df272e0996a17c38530d9da7be7bbc83602d505bce00b2e3d7c50"} Jan 27 21:57:16 crc kubenswrapper[4803]: I0127 21:57:16.755936 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"95521df131317a8fb1bb4697014746e375ef67b38dfab0db8cdee522c9087edc"} Jan 27 21:57:16 crc kubenswrapper[4803]: I0127 21:57:16.755968 4803 scope.go:117] "RemoveContainer" containerID="eab3307c7662fa4415bdda98a4550f98a4f3e4518c2ba81876e66dccef2535a4" Jan 27 21:59:16 crc kubenswrapper[4803]: I0127 21:59:16.343324 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:59:16 crc kubenswrapper[4803]: I0127 21:59:16.344178 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:59:44 crc kubenswrapper[4803]: I0127 21:59:44.991011 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck"] Jan 27 21:59:44 crc kubenswrapper[4803]: E0127 21:59:44.991799 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd997814-9e1e-40b8-9ae5-725aa96ce1ce" containerName="console" Jan 27 21:59:44 crc kubenswrapper[4803]: I0127 21:59:44.991815 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd997814-9e1e-40b8-9ae5-725aa96ce1ce" containerName="console" Jan 27 21:59:44 crc kubenswrapper[4803]: I0127 21:59:44.992022 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd997814-9e1e-40b8-9ae5-725aa96ce1ce" containerName="console" Jan 27 21:59:44 crc 
kubenswrapper[4803]: I0127 21:59:44.993080 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:44 crc kubenswrapper[4803]: I0127 21:59:44.994645 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.019692 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck"] Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.021005 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s9cc\" (UniqueName: \"kubernetes.io/projected/3eb17edf-3450-4f70-b33e-864605aa1e6c-kube-api-access-5s9cc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.021061 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.021170 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.122169 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s9cc\" (UniqueName: \"kubernetes.io/projected/3eb17edf-3450-4f70-b33e-864605aa1e6c-kube-api-access-5s9cc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.122222 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.122305 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.122811 4803 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.122935 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.144313 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s9cc\" (UniqueName: \"kubernetes.io/projected/3eb17edf-3450-4f70-b33e-864605aa1e6c-kube-api-access-5s9cc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.323650 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.728011 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck"] Jan 27 21:59:45 crc kubenswrapper[4803]: I0127 21:59:45.766173 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" event={"ID":"3eb17edf-3450-4f70-b33e-864605aa1e6c","Type":"ContainerStarted","Data":"9afa87a9ed540339b02a11c60a69eaa8a8e664214e971bfb73d3e3c65d8d9d32"} Jan 27 21:59:46 crc kubenswrapper[4803]: E0127 21:59:46.301274 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eb17edf_3450_4f70_b33e_864605aa1e6c.slice/crio-f09160b10b9130c43365a0e1b744047a9ef9a21206865819aebf7c89b0d6446a.scope\": RecentStats: unable to find data in memory cache]" Jan 27 21:59:46 crc kubenswrapper[4803]: I0127 21:59:46.344474 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:59:46 crc kubenswrapper[4803]: I0127 21:59:46.344564 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:59:46 crc kubenswrapper[4803]: I0127 21:59:46.774228 4803 generic.go:334] "Generic (PLEG): container finished" podID="3eb17edf-3450-4f70-b33e-864605aa1e6c" containerID="f09160b10b9130c43365a0e1b744047a9ef9a21206865819aebf7c89b0d6446a" exitCode=0 Jan 27 21:59:46 crc kubenswrapper[4803]: I0127 21:59:46.774297 
4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" event={"ID":"3eb17edf-3450-4f70-b33e-864605aa1e6c","Type":"ContainerDied","Data":"f09160b10b9130c43365a0e1b744047a9ef9a21206865819aebf7c89b0d6446a"} Jan 27 21:59:46 crc kubenswrapper[4803]: I0127 21:59:46.776225 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:59:48 crc kubenswrapper[4803]: I0127 21:59:48.786819 4803 generic.go:334] "Generic (PLEG): container finished" podID="3eb17edf-3450-4f70-b33e-864605aa1e6c" containerID="b1de8735c2fd551c6158fccaf87a9a3cae16ca1c5cb35a9a52a213aa3c653109" exitCode=0 Jan 27 21:59:48 crc kubenswrapper[4803]: I0127 21:59:48.786901 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" event={"ID":"3eb17edf-3450-4f70-b33e-864605aa1e6c","Type":"ContainerDied","Data":"b1de8735c2fd551c6158fccaf87a9a3cae16ca1c5cb35a9a52a213aa3c653109"} Jan 27 21:59:49 crc kubenswrapper[4803]: I0127 21:59:49.796413 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" event={"ID":"3eb17edf-3450-4f70-b33e-864605aa1e6c","Type":"ContainerStarted","Data":"6e862fb09219fe718ff7f6daa381505821b4f87e8c920a5771fff7e4c362a9c7"} Jan 27 21:59:49 crc kubenswrapper[4803]: I0127 21:59:49.816084 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" podStartSLOduration=4.749302077 podStartE2EDuration="5.816066962s" podCreationTimestamp="2026-01-27 21:59:44 +0000 UTC" firstStartedPulling="2026-01-27 21:59:46.775990151 +0000 UTC m=+739.192011850" lastFinishedPulling="2026-01-27 21:59:47.842755036 +0000 UTC m=+740.258776735" observedRunningTime="2026-01-27 21:59:49.812276649 +0000 UTC m=+742.228298378" watchObservedRunningTime="2026-01-27 21:59:49.816066962 +0000 UTC m=+742.232088661" Jan 27 21:59:50 crc kubenswrapper[4803]: I0127 21:59:50.804813 4803 generic.go:334] "Generic (PLEG): container finished" podID="3eb17edf-3450-4f70-b33e-864605aa1e6c" containerID="6e862fb09219fe718ff7f6daa381505821b4f87e8c920a5771fff7e4c362a9c7" exitCode=0 Jan 27 21:59:50 crc kubenswrapper[4803]: I0127 21:59:50.804905 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" event={"ID":"3eb17edf-3450-4f70-b33e-864605aa1e6c","Type":"ContainerDied","Data":"6e862fb09219fe718ff7f6daa381505821b4f87e8c920a5771fff7e4c362a9c7"} Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.083092 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.224477 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-bundle\") pod \"3eb17edf-3450-4f70-b33e-864605aa1e6c\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.224543 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-util\") pod \"3eb17edf-3450-4f70-b33e-864605aa1e6c\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.224566 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s9cc\" (UniqueName: \"kubernetes.io/projected/3eb17edf-3450-4f70-b33e-864605aa1e6c-kube-api-access-5s9cc\") pod \"3eb17edf-3450-4f70-b33e-864605aa1e6c\" (UID: \"3eb17edf-3450-4f70-b33e-864605aa1e6c\") " Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.226508 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-bundle" (OuterVolumeSpecName: "bundle") pod "3eb17edf-3450-4f70-b33e-864605aa1e6c" (UID: "3eb17edf-3450-4f70-b33e-864605aa1e6c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.234524 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-util" (OuterVolumeSpecName: "util") pod "3eb17edf-3450-4f70-b33e-864605aa1e6c" (UID: "3eb17edf-3450-4f70-b33e-864605aa1e6c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.235777 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb17edf-3450-4f70-b33e-864605aa1e6c-kube-api-access-5s9cc" (OuterVolumeSpecName: "kube-api-access-5s9cc") pod "3eb17edf-3450-4f70-b33e-864605aa1e6c" (UID: "3eb17edf-3450-4f70-b33e-864605aa1e6c"). InnerVolumeSpecName "kube-api-access-5s9cc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.325747 4803 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.325777 4803 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb17edf-3450-4f70-b33e-864605aa1e6c-util\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.325788 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s9cc\" (UniqueName: \"kubernetes.io/projected/3eb17edf-3450-4f70-b33e-864605aa1e6c-kube-api-access-5s9cc\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.823225 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" event={"ID":"3eb17edf-3450-4f70-b33e-864605aa1e6c","Type":"ContainerDied","Data":"9afa87a9ed540339b02a11c60a69eaa8a8e664214e971bfb73d3e3c65d8d9d32"} Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.823270 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9afa87a9ed540339b02a11c60a69eaa8a8e664214e971bfb73d3e3c65d8d9d32" Jan 27 21:59:52 crc kubenswrapper[4803]: I0127 21:59:52.823271 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck" Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.056863 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6dhj4"] Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.057655 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="nbdb" containerID="cri-o://cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1" gracePeriod=30 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.057719 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d" gracePeriod=30 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.057618 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovn-controller" containerID="cri-o://f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386" gracePeriod=30 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.057755 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovn-acl-logging" containerID="cri-o://aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1" gracePeriod=30 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.057727 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="northd" 
containerID="cri-o://14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904" gracePeriod=30 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.057768 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="sbdb" containerID="cri-o://f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495" gracePeriod=30 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.057763 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="kube-rbac-proxy-node" containerID="cri-o://0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0" gracePeriod=30 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.154499 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" containerID="cri-o://95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7" gracePeriod=30 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.846449 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovnkube-controller/3.log" Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.848694 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovn-acl-logging/0.log" Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849180 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovn-controller/0.log" Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849614 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7" exitCode=0 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849636 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495" exitCode=0 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849645 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1" exitCode=0 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849655 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904" exitCode=0 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849664 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1" exitCode=143 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849671 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386" exitCode=143 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849711 4803 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7"} Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849734 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495"} Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849745 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1"} Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849754 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904"} Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849762 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1"} Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849771 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386"} Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.849786 4803 scope.go:117] "RemoveContainer" containerID="0125572d11adf9e37e8ad7f9829f4e35266899c012f237ba2df4f566b650104f" Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.852180 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qnns7_2a912f01-6d26-421f-8b21-fb2f98d5c2e6/kube-multus/2.log" Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.852518 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qnns7_2a912f01-6d26-421f-8b21-fb2f98d5c2e6/kube-multus/1.log" Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.852549 4803 generic.go:334] "Generic (PLEG): container finished" podID="2a912f01-6d26-421f-8b21-fb2f98d5c2e6" containerID="a4168203fe1e337403d6d45baececb9bddd8657d937ea27698b6e75c27ff002a" exitCode=2 Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.852579 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qnns7" event={"ID":"2a912f01-6d26-421f-8b21-fb2f98d5c2e6","Type":"ContainerDied","Data":"a4168203fe1e337403d6d45baececb9bddd8657d937ea27698b6e75c27ff002a"} Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.853111 4803 scope.go:117] "RemoveContainer" containerID="a4168203fe1e337403d6d45baececb9bddd8657d937ea27698b6e75c27ff002a" Jan 27 21:59:56 crc kubenswrapper[4803]: I0127 21:59:56.881340 4803 scope.go:117] "RemoveContainer" containerID="59df9f103f769b95337ed2b17d17dbf264eed9dca7cc1a0ef5f455043d209b66" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.300554 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovn-acl-logging/0.log" Jan 27 
21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.300959 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovn-controller/0.log" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.301295 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.396672 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-node-log\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397009 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-slash\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.396813 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-node-log" (OuterVolumeSpecName: "node-log") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397038 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xnhr\" (UniqueName: \"kubernetes.io/projected/db438ee2-57c2-4cbf-9d4b-96f8587647d6-kube-api-access-4xnhr\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397071 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-netns\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397076 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-slash" (OuterVolumeSpecName: "host-slash") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397092 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovn-node-metrics-cert\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397117 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397147 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-systemd-units\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397162 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-netd\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397189 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-systemd\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397204 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-etc-openvswitch\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397236 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-ovn-kubernetes\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397254 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-bin\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397274 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-log-socket\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397317 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-openvswitch\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397338 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-env-overrides\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397352 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-kubelet\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397369 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-ovn\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397399 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-var-lib-openvswitch\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397418 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-script-lib\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397436 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-config\") pod \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\" (UID: \"db438ee2-57c2-4cbf-9d4b-96f8587647d6\") " Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397711 4803 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397726 4803 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.397978 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.398023 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.398431 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.398455 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.398488 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.398898 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.399373 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.400515 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.400577 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.400598 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.400900 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.400945 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.400967 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.400988 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-log-socket" (OuterVolumeSpecName: "log-socket") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.401008 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.413243 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db438ee2-57c2-4cbf-9d4b-96f8587647d6-kube-api-access-4xnhr" (OuterVolumeSpecName: "kube-api-access-4xnhr") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). 
InnerVolumeSpecName "kube-api-access-4xnhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.421597 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.423004 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "db438ee2-57c2-4cbf-9d4b-96f8587647d6" (UID: "db438ee2-57c2-4cbf-9d4b-96f8587647d6"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.456972 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-24rf7"] Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457210 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457227 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457235 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovn-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457240 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovn-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457248 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457254 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457266 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457271 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457280 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="sbdb" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457286 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="sbdb" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457294 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="kubecfg-setup" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457300 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="kubecfg-setup" Jan 27 
21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457311 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="northd" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457317 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="northd" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457325 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb17edf-3450-4f70-b33e-864605aa1e6c" containerName="pull" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457330 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb17edf-3450-4f70-b33e-864605aa1e6c" containerName="pull" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457342 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="nbdb" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457347 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="nbdb" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457355 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457360 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457366 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovn-acl-logging" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457372 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovn-acl-logging" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457380 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb17edf-3450-4f70-b33e-864605aa1e6c" containerName="extract" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457386 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb17edf-3450-4f70-b33e-864605aa1e6c" containerName="extract" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457395 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="kube-rbac-proxy-node" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457400 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="kube-rbac-proxy-node" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457411 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb17edf-3450-4f70-b33e-864605aa1e6c" containerName="util" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457416 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb17edf-3450-4f70-b33e-864605aa1e6c" containerName="util" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457516 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457525 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="northd" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457533 4803 
memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovn-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457540 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovn-acl-logging" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457550 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="kube-rbac-proxy-node" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457557 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb17edf-3450-4f70-b33e-864605aa1e6c" containerName="extract" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457568 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="nbdb" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457577 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457584 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457591 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457599 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="sbdb" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457607 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457716 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457723 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: E0127 21:59:57.457733 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457739 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.457829 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerName="ovnkube-controller" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.459564 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.498935 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-slash\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.498977 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzklc\" (UniqueName: \"kubernetes.io/projected/c87b2272-16b5-4b53-9a41-f53e22f176b7-kube-api-access-tzklc\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.498997 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-node-log\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499015 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c87b2272-16b5-4b53-9a41-f53e22f176b7-ovnkube-script-lib\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499048 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c87b2272-16b5-4b53-9a41-f53e22f176b7-ovnkube-config\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499064 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-systemd-units\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499084 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-log-socket\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499098 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-cni-bin\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499118 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-kubelet\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499139 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-run-openvswitch\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499175 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c87b2272-16b5-4b53-9a41-f53e22f176b7-env-overrides\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499197 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-etc-openvswitch\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499269 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c87b2272-16b5-4b53-9a41-f53e22f176b7-ovn-node-metrics-cert\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499317 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-run-netns\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499343 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-var-lib-openvswitch\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499401 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-cni-netd\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499445 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 
21:59:57.499471 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-run-ovn-kubernetes\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499501 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-run-ovn\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499518 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-run-systemd\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499592 4803 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499604 4803 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499613 4803 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499626 4803 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499634 4803 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499643 4803 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499653 4803 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499662 4803 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499673 4803 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-script-lib\") on node 
\"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499681 4803 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499691 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xnhr\" (UniqueName: \"kubernetes.io/projected/db438ee2-57c2-4cbf-9d4b-96f8587647d6-kube-api-access-4xnhr\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499701 4803 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499709 4803 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/db438ee2-57c2-4cbf-9d4b-96f8587647d6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499718 4803 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499727 4803 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499736 4803 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499745 4803 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.499753 4803 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/db438ee2-57c2-4cbf-9d4b-96f8587647d6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.600895 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-cni-netd\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.600964 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.600987 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-run-ovn-kubernetes\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601023 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-run-ovn\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601040 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-run-systemd\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601064 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-slash\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601080 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzklc\" (UniqueName: \"kubernetes.io/projected/c87b2272-16b5-4b53-9a41-f53e22f176b7-kube-api-access-tzklc\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601112 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-node-log\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601129 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c87b2272-16b5-4b53-9a41-f53e22f176b7-ovnkube-script-lib\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601149 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c87b2272-16b5-4b53-9a41-f53e22f176b7-ovnkube-config\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601165 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-systemd-units\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601197 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-log-socket\") pod \"ovnkube-node-24rf7\" (UID: 
\"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601212 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-cni-bin\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601230 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-kubelet\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601247 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-run-openvswitch\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601295 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c87b2272-16b5-4b53-9a41-f53e22f176b7-env-overrides\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601314 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-etc-openvswitch\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601331 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c87b2272-16b5-4b53-9a41-f53e22f176b7-ovn-node-metrics-cert\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601346 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-run-netns\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601361 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-var-lib-openvswitch\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601470 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-var-lib-openvswitch\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601511 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-cni-netd\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601531 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601552 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-run-ovn-kubernetes\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601572 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-run-ovn\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601591 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-run-systemd\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601609 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-slash\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.601932 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-node-log\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.602263 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-kubelet\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.602343 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-etc-openvswitch\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.602361 4803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-run-openvswitch\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.602417 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-run-netns\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.602463 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-systemd-units\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.602467 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-host-cni-bin\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.602592 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c87b2272-16b5-4b53-9a41-f53e22f176b7-log-socket\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.602669 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c87b2272-16b5-4b53-9a41-f53e22f176b7-ovnkube-script-lib\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.602890 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c87b2272-16b5-4b53-9a41-f53e22f176b7-ovnkube-config\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.602916 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c87b2272-16b5-4b53-9a41-f53e22f176b7-env-overrides\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.605934 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c87b2272-16b5-4b53-9a41-f53e22f176b7-ovn-node-metrics-cert\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.628674 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzklc\" (UniqueName: 
\"kubernetes.io/projected/c87b2272-16b5-4b53-9a41-f53e22f176b7-kube-api-access-tzklc\") pod \"ovnkube-node-24rf7\" (UID: \"c87b2272-16b5-4b53-9a41-f53e22f176b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.771902 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.864867 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qnns7_2a912f01-6d26-421f-8b21-fb2f98d5c2e6/kube-multus/2.log" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.864967 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qnns7" event={"ID":"2a912f01-6d26-421f-8b21-fb2f98d5c2e6","Type":"ContainerStarted","Data":"98c1d435db0a7d918283dea5f1517ec031cfafc914a595321ed393ae607a966f"} Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.870428 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovn-acl-logging/0.log" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.870918 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6dhj4_db438ee2-57c2-4cbf-9d4b-96f8587647d6/ovn-controller/0.log" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.872350 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d" exitCode=0 Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.872380 4803 generic.go:334] "Generic (PLEG): container finished" podID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" containerID="0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0" exitCode=0 Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.872433 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d"} Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.872461 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0"} Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.872475 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" event={"ID":"db438ee2-57c2-4cbf-9d4b-96f8587647d6","Type":"ContainerDied","Data":"854e03ada2428f3caddbacc5284f818977e9a30ba55be33a226a6a94747b0196"} Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.872495 4803 scope.go:117] "RemoveContainer" containerID="95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.872634 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6dhj4" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.881583 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" event={"ID":"c87b2272-16b5-4b53-9a41-f53e22f176b7","Type":"ContainerStarted","Data":"1aff339c4eb687bc4c719cfa781fdf6946ec1a3b69f3f279ad120ef12a7d02ba"} Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.901026 4803 scope.go:117] "RemoveContainer" containerID="f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.925440 4803 scope.go:117] "RemoveContainer" containerID="cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.951657 4803 scope.go:117] "RemoveContainer" containerID="14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904" Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.985588 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6dhj4"] Jan 27 21:59:57 crc kubenswrapper[4803]: I0127 21:59:57.991887 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6dhj4"] Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.018505 4803 scope.go:117] "RemoveContainer" containerID="d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.031366 4803 scope.go:117] "RemoveContainer" containerID="0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.052006 4803 scope.go:117] "RemoveContainer" containerID="aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.066060 4803 scope.go:117] "RemoveContainer" containerID="f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.087381 4803 scope.go:117] "RemoveContainer" containerID="f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.105297 4803 scope.go:117] "RemoveContainer" containerID="95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7" Jan 27 21:59:58 crc kubenswrapper[4803]: E0127 21:59:58.106256 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7\": container with ID starting with 95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7 not found: ID does not exist" containerID="95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.106318 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7"} err="failed to get container status \"95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7\": rpc error: code = NotFound desc = could not find container \"95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7\": container with ID starting with 95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.106344 4803 scope.go:117] "RemoveContainer" 
containerID="f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495" Jan 27 21:59:58 crc kubenswrapper[4803]: E0127 21:59:58.106807 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\": container with ID starting with f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495 not found: ID does not exist" containerID="f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.106832 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495"} err="failed to get container status \"f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\": rpc error: code = NotFound desc = could not find container \"f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\": container with ID starting with f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.106862 4803 scope.go:117] "RemoveContainer" containerID="cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1" Jan 27 21:59:58 crc kubenswrapper[4803]: E0127 21:59:58.107682 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\": container with ID starting with cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1 not found: ID does not exist" containerID="cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.107705 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1"} err="failed to get container status \"cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\": rpc error: code = NotFound desc = could not find container \"cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\": container with ID starting with cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.107727 4803 scope.go:117] "RemoveContainer" containerID="14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904" Jan 27 21:59:58 crc kubenswrapper[4803]: E0127 21:59:58.107955 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\": container with ID starting with 14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904 not found: ID does not exist" containerID="14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.107977 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904"} err="failed to get container status \"14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\": rpc error: code = NotFound desc = could not find container \"14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\": container with ID starting with 
14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.107992 4803 scope.go:117] "RemoveContainer" containerID="d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d" Jan 27 21:59:58 crc kubenswrapper[4803]: E0127 21:59:58.108303 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\": container with ID starting with d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d not found: ID does not exist" containerID="d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.108328 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d"} err="failed to get container status \"d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\": rpc error: code = NotFound desc = could not find container \"d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\": container with ID starting with d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.108342 4803 scope.go:117] "RemoveContainer" containerID="0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0" Jan 27 21:59:58 crc kubenswrapper[4803]: E0127 21:59:58.108548 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\": container with ID starting with 0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0 not found: ID does not exist" containerID="0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.108578 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0"} err="failed to get container status \"0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\": rpc error: code = NotFound desc = could not find container \"0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\": container with ID starting with 0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.108590 4803 scope.go:117] "RemoveContainer" containerID="aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1" Jan 27 21:59:58 crc kubenswrapper[4803]: E0127 21:59:58.109070 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\": container with ID starting with aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1 not found: ID does not exist" containerID="aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.109090 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1"} err="failed to get container status \"aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\": rpc 
error: code = NotFound desc = could not find container \"aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\": container with ID starting with aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.109103 4803 scope.go:117] "RemoveContainer" containerID="f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386" Jan 27 21:59:58 crc kubenswrapper[4803]: E0127 21:59:58.109320 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\": container with ID starting with f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386 not found: ID does not exist" containerID="f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.109339 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386"} err="failed to get container status \"f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\": rpc error: code = NotFound desc = could not find container \"f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\": container with ID starting with f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.109351 4803 scope.go:117] "RemoveContainer" containerID="f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade" Jan 27 21:59:58 crc kubenswrapper[4803]: E0127 21:59:58.109641 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\": container with ID starting with f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade not found: ID does not exist" containerID="f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.109690 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade"} err="failed to get container status \"f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\": rpc error: code = NotFound desc = could not find container \"f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\": container with ID starting with f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.109718 4803 scope.go:117] "RemoveContainer" containerID="95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.110087 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7"} err="failed to get container status \"95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7\": rpc error: code = NotFound desc = could not find container \"95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7\": container with ID starting with 95677dbb3c07983d658f77237194f1f75b0d7ebe4487fadfbfa582d43961bde7 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 
21:59:58.110107 4803 scope.go:117] "RemoveContainer" containerID="f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.110362 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495"} err="failed to get container status \"f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\": rpc error: code = NotFound desc = could not find container \"f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495\": container with ID starting with f8468771fd175b1fbb08fcfbcb4849df31a68598f3f83f449acb269493ad0495 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.110397 4803 scope.go:117] "RemoveContainer" containerID="cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.110703 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1"} err="failed to get container status \"cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\": rpc error: code = NotFound desc = could not find container \"cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1\": container with ID starting with cf9767ce288a39b4a5cbeb1bebd9e8519fcffa283ada9cefe552f0438c4a42f1 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.110724 4803 scope.go:117] "RemoveContainer" containerID="14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.110943 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904"} err="failed to get container status \"14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\": rpc error: code = NotFound desc = could not find container \"14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904\": container with ID starting with 14c80049b37a21fcca624cc2bacfc26e6022a2d4a2ae3063303710ffc2cd9904 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.110963 4803 scope.go:117] "RemoveContainer" containerID="d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.111222 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d"} err="failed to get container status \"d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\": rpc error: code = NotFound desc = could not find container \"d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d\": container with ID starting with d56562c104a66666d6e140a2cd17d7b3c0c0d6f3730ed5c4c1e09763f4c1e72d not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.111244 4803 scope.go:117] "RemoveContainer" containerID="0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.111548 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0"} err="failed to get container status 
\"0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\": rpc error: code = NotFound desc = could not find container \"0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0\": container with ID starting with 0582dfccfe2e787dbbe0d0298803e39fe1b0c7693ea8f5fa7aa70cee4ba599c0 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.111570 4803 scope.go:117] "RemoveContainer" containerID="aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.111874 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1"} err="failed to get container status \"aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\": rpc error: code = NotFound desc = could not find container \"aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1\": container with ID starting with aae578a8fdfa91d53acc5fd9655172f200c8f04db4902404e805ca949910c5a1 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.111905 4803 scope.go:117] "RemoveContainer" containerID="f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.112216 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386"} err="failed to get container status \"f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\": rpc error: code = NotFound desc = could not find container \"f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386\": container with ID starting with f64d8c0c02de9a74f659db1f536b53dec7027e6f3c8166855c95956b0f002386 not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.112238 4803 scope.go:117] "RemoveContainer" containerID="f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.112520 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade"} err="failed to get container status \"f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\": rpc error: code = NotFound desc = could not find container \"f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade\": container with ID starting with f5161aa42af648ab1d0f4c7480cb8ae5858df5e886d051d2be05f7c66e443ade not found: ID does not exist" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.314410 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db438ee2-57c2-4cbf-9d4b-96f8587647d6" path="/var/lib/kubelet/pods/db438ee2-57c2-4cbf-9d4b-96f8587647d6/volumes" Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.887896 4803 generic.go:334] "Generic (PLEG): container finished" podID="c87b2272-16b5-4b53-9a41-f53e22f176b7" containerID="69586abe61dfd0e290f3e61400dd9b4fb9e702fc9fcecbea7fb0f647625e457b" exitCode=0 Jan 27 21:59:58 crc kubenswrapper[4803]: I0127 21:59:58.887986 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" event={"ID":"c87b2272-16b5-4b53-9a41-f53e22f176b7","Type":"ContainerDied","Data":"69586abe61dfd0e290f3e61400dd9b4fb9e702fc9fcecbea7fb0f647625e457b"} Jan 27 21:59:59 crc kubenswrapper[4803]: I0127 21:59:59.895798 4803 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" event={"ID":"c87b2272-16b5-4b53-9a41-f53e22f176b7","Type":"ContainerStarted","Data":"a724475a1c6bf21f9b47038a20552ed8a4c65034544457bffb8081da1ca27f79"} Jan 27 21:59:59 crc kubenswrapper[4803]: I0127 21:59:59.896317 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" event={"ID":"c87b2272-16b5-4b53-9a41-f53e22f176b7","Type":"ContainerStarted","Data":"4fe9d4d10b27c3e2107d978c88b0ae01a615e93db24167c1ec04d590a7f224f4"} Jan 27 21:59:59 crc kubenswrapper[4803]: I0127 21:59:59.896328 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" event={"ID":"c87b2272-16b5-4b53-9a41-f53e22f176b7","Type":"ContainerStarted","Data":"ac00a209bc9b77eb3780a2606f223c1521406f1335df016e6220012c1abd337d"} Jan 27 21:59:59 crc kubenswrapper[4803]: I0127 21:59:59.896337 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" event={"ID":"c87b2272-16b5-4b53-9a41-f53e22f176b7","Type":"ContainerStarted","Data":"3e93bf2a8e501b1be371f047be27fa1fc14ff89f60a5b57f1f49d5a0ebfb69f6"} Jan 27 21:59:59 crc kubenswrapper[4803]: I0127 21:59:59.896345 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" event={"ID":"c87b2272-16b5-4b53-9a41-f53e22f176b7","Type":"ContainerStarted","Data":"3cbc27f62e5f556d1c47a9c07e85ec2995efbf3fff3d1722e0b491669c914d53"} Jan 27 21:59:59 crc kubenswrapper[4803]: I0127 21:59:59.896352 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" event={"ID":"c87b2272-16b5-4b53-9a41-f53e22f176b7","Type":"ContainerStarted","Data":"f7884f8a617cefa0aa9ee4677e86ce66e054afa289911e7bab5b9f5977b8ec51"} Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.159313 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm"] Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.160255 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.165503 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.165613 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.235165 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9ce2383-1f63-4adf-964b-2e6769ac9957-config-volume\") pod \"collect-profiles-29492520-bkldm\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.235240 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9ce2383-1f63-4adf-964b-2e6769ac9957-secret-volume\") pod \"collect-profiles-29492520-bkldm\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.235281 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-499rm\" (UniqueName: \"kubernetes.io/projected/c9ce2383-1f63-4adf-964b-2e6769ac9957-kube-api-access-499rm\") pod \"collect-profiles-29492520-bkldm\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.337084 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9ce2383-1f63-4adf-964b-2e6769ac9957-secret-volume\") pod \"collect-profiles-29492520-bkldm\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.337231 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-499rm\" (UniqueName: \"kubernetes.io/projected/c9ce2383-1f63-4adf-964b-2e6769ac9957-kube-api-access-499rm\") pod \"collect-profiles-29492520-bkldm\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.337367 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9ce2383-1f63-4adf-964b-2e6769ac9957-config-volume\") pod \"collect-profiles-29492520-bkldm\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.338213 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9ce2383-1f63-4adf-964b-2e6769ac9957-config-volume\") pod \"collect-profiles-29492520-bkldm\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc 
kubenswrapper[4803]: I0127 22:00:00.344269 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9ce2383-1f63-4adf-964b-2e6769ac9957-secret-volume\") pod \"collect-profiles-29492520-bkldm\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.352867 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-499rm\" (UniqueName: \"kubernetes.io/projected/c9ce2383-1f63-4adf-964b-2e6769ac9957-kube-api-access-499rm\") pod \"collect-profiles-29492520-bkldm\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: I0127 22:00:00.474535 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: E0127 22:00:00.503578 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager_c9ce2383-1f63-4adf-964b-2e6769ac9957_0(bff0202dcd5e1bad51d987d14eba31c067258e80f1e9a507ba550d51dec3b690): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 22:00:00 crc kubenswrapper[4803]: E0127 22:00:00.503934 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager_c9ce2383-1f63-4adf-964b-2e6769ac9957_0(bff0202dcd5e1bad51d987d14eba31c067258e80f1e9a507ba550d51dec3b690): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: E0127 22:00:00.503956 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager_c9ce2383-1f63-4adf-964b-2e6769ac9957_0(bff0202dcd5e1bad51d987d14eba31c067258e80f1e9a507ba550d51dec3b690): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:00 crc kubenswrapper[4803]: E0127 22:00:00.504002 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager(c9ce2383-1f63-4adf-964b-2e6769ac9957)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager(c9ce2383-1f63-4adf-964b-2e6769ac9957)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager_c9ce2383-1f63-4adf-964b-2e6769ac9957_0(bff0202dcd5e1bad51d987d14eba31c067258e80f1e9a507ba550d51dec3b690): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" podUID="c9ce2383-1f63-4adf-964b-2e6769ac9957" Jan 27 22:00:01 crc kubenswrapper[4803]: I0127 22:00:01.577488 4803 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 22:00:01 crc kubenswrapper[4803]: I0127 22:00:01.863058 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg"] Jan 27 22:00:01 crc kubenswrapper[4803]: I0127 22:00:01.863765 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:01 crc kubenswrapper[4803]: I0127 22:00:01.866534 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-rbh2h" Jan 27 22:00:01 crc kubenswrapper[4803]: I0127 22:00:01.866679 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 27 22:00:01 crc kubenswrapper[4803]: I0127 22:00:01.870092 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 27 22:00:01 crc kubenswrapper[4803]: I0127 22:00:01.957723 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8nwg\" (UniqueName: \"kubernetes.io/projected/67bbe061-3ab2-43cf-9579-900c0ff65da9-kube-api-access-t8nwg\") pod \"obo-prometheus-operator-68bc856cb9-qtnmg\" (UID: \"67bbe061-3ab2-43cf-9579-900c0ff65da9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:01 crc kubenswrapper[4803]: I0127 22:00:01.993124 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv"] Jan 27 22:00:01 crc kubenswrapper[4803]: I0127 22:00:01.994105 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:01 crc kubenswrapper[4803]: I0127 22:00:01.995948 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 27 22:00:01 crc kubenswrapper[4803]: I0127 22:00:01.996343 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-q957p" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.008635 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7"] Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.009424 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.059516 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48ffb065-6bf7-4b9c-981e-f834ead82767-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7\" (UID: \"48ffb065-6bf7-4b9c-981e-f834ead82767\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.059586 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48ffb065-6bf7-4b9c-981e-f834ead82767-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7\" (UID: \"48ffb065-6bf7-4b9c-981e-f834ead82767\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.059611 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eed68546-4e6f-4551-95ab-7e870b098179-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-v75wv\" (UID: \"eed68546-4e6f-4551-95ab-7e870b098179\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.059688 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eed68546-4e6f-4551-95ab-7e870b098179-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-v75wv\" (UID: \"eed68546-4e6f-4551-95ab-7e870b098179\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.059718 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8nwg\" (UniqueName: \"kubernetes.io/projected/67bbe061-3ab2-43cf-9579-900c0ff65da9-kube-api-access-t8nwg\") pod \"obo-prometheus-operator-68bc856cb9-qtnmg\" (UID: \"67bbe061-3ab2-43cf-9579-900c0ff65da9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.078731 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8nwg\" (UniqueName: \"kubernetes.io/projected/67bbe061-3ab2-43cf-9579-900c0ff65da9-kube-api-access-t8nwg\") pod \"obo-prometheus-operator-68bc856cb9-qtnmg\" (UID: \"67bbe061-3ab2-43cf-9579-900c0ff65da9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.161089 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48ffb065-6bf7-4b9c-981e-f834ead82767-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7\" (UID: \"48ffb065-6bf7-4b9c-981e-f834ead82767\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.161155 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/48ffb065-6bf7-4b9c-981e-f834ead82767-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7\" (UID: \"48ffb065-6bf7-4b9c-981e-f834ead82767\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.161175 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eed68546-4e6f-4551-95ab-7e870b098179-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-v75wv\" (UID: \"eed68546-4e6f-4551-95ab-7e870b098179\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.161243 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eed68546-4e6f-4551-95ab-7e870b098179-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-v75wv\" (UID: \"eed68546-4e6f-4551-95ab-7e870b098179\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.164166 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eed68546-4e6f-4551-95ab-7e870b098179-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-v75wv\" (UID: \"eed68546-4e6f-4551-95ab-7e870b098179\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.164288 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eed68546-4e6f-4551-95ab-7e870b098179-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-v75wv\" (UID: \"eed68546-4e6f-4551-95ab-7e870b098179\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.164781 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48ffb065-6bf7-4b9c-981e-f834ead82767-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7\" (UID: \"48ffb065-6bf7-4b9c-981e-f834ead82767\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.165266 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48ffb065-6bf7-4b9c-981e-f834ead82767-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7\" (UID: \"48ffb065-6bf7-4b9c-981e-f834ead82767\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.183934 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.188475 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-skn2q"] Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.189188 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.193156 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-2rpcv" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.193230 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.209594 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators_67bbe061-3ab2-43cf-9579-900c0ff65da9_0(b6ee2ba918ff9c7cb16b0d00f733f4afc078dea418d74196535dfe4a9daa2b23): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.209649 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators_67bbe061-3ab2-43cf-9579-900c0ff65da9_0(b6ee2ba918ff9c7cb16b0d00f733f4afc078dea418d74196535dfe4a9daa2b23): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.209673 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators_67bbe061-3ab2-43cf-9579-900c0ff65da9_0(b6ee2ba918ff9c7cb16b0d00f733f4afc078dea418d74196535dfe4a9daa2b23): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.209715 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators(67bbe061-3ab2-43cf-9579-900c0ff65da9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators(67bbe061-3ab2-43cf-9579-900c0ff65da9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators_67bbe061-3ab2-43cf-9579-900c0ff65da9_0(b6ee2ba918ff9c7cb16b0d00f733f4afc078dea418d74196535dfe4a9daa2b23): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" podUID="67bbe061-3ab2-43cf-9579-900c0ff65da9" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.262931 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25s28\" (UniqueName: \"kubernetes.io/projected/69126409-4642-4d42-855d-e7325b3de7c5-kube-api-access-25s28\") pod \"observability-operator-59bdc8b94-skn2q\" (UID: \"69126409-4642-4d42-855d-e7325b3de7c5\") " pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.263006 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/69126409-4642-4d42-855d-e7325b3de7c5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-skn2q\" (UID: \"69126409-4642-4d42-855d-e7325b3de7c5\") " pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.311342 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.327486 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.345257 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators_eed68546-4e6f-4551-95ab-7e870b098179_0(5adf9f204db1728f66ba85d8162d11e6f135fa604ba1bc1dbe225152f1598679): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.345335 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators_eed68546-4e6f-4551-95ab-7e870b098179_0(5adf9f204db1728f66ba85d8162d11e6f135fa604ba1bc1dbe225152f1598679): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.345362 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators_eed68546-4e6f-4551-95ab-7e870b098179_0(5adf9f204db1728f66ba85d8162d11e6f135fa604ba1bc1dbe225152f1598679): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.345419 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators(eed68546-4e6f-4551-95ab-7e870b098179)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators(eed68546-4e6f-4551-95ab-7e870b098179)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators_eed68546-4e6f-4551-95ab-7e870b098179_0(5adf9f204db1728f66ba85d8162d11e6f135fa604ba1bc1dbe225152f1598679): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" podUID="eed68546-4e6f-4551-95ab-7e870b098179" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.359605 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators_48ffb065-6bf7-4b9c-981e-f834ead82767_0(ecd4be0c20397cacb35828c08d64f1b367ca001701cf9ca5e47d6893c93d834c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.359681 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators_48ffb065-6bf7-4b9c-981e-f834ead82767_0(ecd4be0c20397cacb35828c08d64f1b367ca001701cf9ca5e47d6893c93d834c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.359710 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators_48ffb065-6bf7-4b9c-981e-f834ead82767_0(ecd4be0c20397cacb35828c08d64f1b367ca001701cf9ca5e47d6893c93d834c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.359776 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators(48ffb065-6bf7-4b9c-981e-f834ead82767)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators(48ffb065-6bf7-4b9c-981e-f834ead82767)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators_48ffb065-6bf7-4b9c-981e-f834ead82767_0(ecd4be0c20397cacb35828c08d64f1b367ca001701cf9ca5e47d6893c93d834c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" podUID="48ffb065-6bf7-4b9c-981e-f834ead82767" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.364041 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/69126409-4642-4d42-855d-e7325b3de7c5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-skn2q\" (UID: \"69126409-4642-4d42-855d-e7325b3de7c5\") " pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.364185 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25s28\" (UniqueName: \"kubernetes.io/projected/69126409-4642-4d42-855d-e7325b3de7c5-kube-api-access-25s28\") pod \"observability-operator-59bdc8b94-skn2q\" (UID: \"69126409-4642-4d42-855d-e7325b3de7c5\") " pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.367556 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/69126409-4642-4d42-855d-e7325b3de7c5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-skn2q\" (UID: \"69126409-4642-4d42-855d-e7325b3de7c5\") " pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.385637 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25s28\" (UniqueName: \"kubernetes.io/projected/69126409-4642-4d42-855d-e7325b3de7c5-kube-api-access-25s28\") pod \"observability-operator-59bdc8b94-skn2q\" (UID: \"69126409-4642-4d42-855d-e7325b3de7c5\") " pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.415276 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-nfxjq"] Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.416126 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.453534 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-t7fxj" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.473545 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8z4j\" (UniqueName: \"kubernetes.io/projected/5b3c1908-cc42-4af3-a73d-916466d38dd6-kube-api-access-q8z4j\") pod \"perses-operator-5bf474d74f-nfxjq\" (UID: \"5b3c1908-cc42-4af3-a73d-916466d38dd6\") " pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.473792 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b3c1908-cc42-4af3-a73d-916466d38dd6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-nfxjq\" (UID: \"5b3c1908-cc42-4af3-a73d-916466d38dd6\") " pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.544136 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.574728 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8z4j\" (UniqueName: \"kubernetes.io/projected/5b3c1908-cc42-4af3-a73d-916466d38dd6-kube-api-access-q8z4j\") pod \"perses-operator-5bf474d74f-nfxjq\" (UID: \"5b3c1908-cc42-4af3-a73d-916466d38dd6\") " pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.574867 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b3c1908-cc42-4af3-a73d-916466d38dd6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-nfxjq\" (UID: \"5b3c1908-cc42-4af3-a73d-916466d38dd6\") " pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.575682 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b3c1908-cc42-4af3-a73d-916466d38dd6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-nfxjq\" (UID: \"5b3c1908-cc42-4af3-a73d-916466d38dd6\") " pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.583569 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-skn2q_openshift-operators_69126409-4642-4d42-855d-e7325b3de7c5_0(fc78c808d9f7c43d820b7fe4c5f0b29f64975830cc09bba3b36a4d9e1da29f43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.583644 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-skn2q_openshift-operators_69126409-4642-4d42-855d-e7325b3de7c5_0(fc78c808d9f7c43d820b7fe4c5f0b29f64975830cc09bba3b36a4d9e1da29f43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.583666 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-skn2q_openshift-operators_69126409-4642-4d42-855d-e7325b3de7c5_0(fc78c808d9f7c43d820b7fe4c5f0b29f64975830cc09bba3b36a4d9e1da29f43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.583706 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-skn2q_openshift-operators(69126409-4642-4d42-855d-e7325b3de7c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-skn2q_openshift-operators(69126409-4642-4d42-855d-e7325b3de7c5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-skn2q_openshift-operators_69126409-4642-4d42-855d-e7325b3de7c5_0(fc78c808d9f7c43d820b7fe4c5f0b29f64975830cc09bba3b36a4d9e1da29f43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.603764 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8z4j\" (UniqueName: \"kubernetes.io/projected/5b3c1908-cc42-4af3-a73d-916466d38dd6-kube-api-access-q8z4j\") pod \"perses-operator-5bf474d74f-nfxjq\" (UID: \"5b3c1908-cc42-4af3-a73d-916466d38dd6\") " pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.730866 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.776549 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-nfxjq_openshift-operators_5b3c1908-cc42-4af3-a73d-916466d38dd6_0(b171c2cd6cc320e8cceb16f8704babfac2a257c8885da710373349c500e0c805): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.776636 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-nfxjq_openshift-operators_5b3c1908-cc42-4af3-a73d-916466d38dd6_0(b171c2cd6cc320e8cceb16f8704babfac2a257c8885da710373349c500e0c805): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.776657 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-nfxjq_openshift-operators_5b3c1908-cc42-4af3-a73d-916466d38dd6_0(b171c2cd6cc320e8cceb16f8704babfac2a257c8885da710373349c500e0c805): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:02 crc kubenswrapper[4803]: E0127 22:00:02.776696 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-nfxjq_openshift-operators(5b3c1908-cc42-4af3-a73d-916466d38dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-nfxjq_openshift-operators(5b3c1908-cc42-4af3-a73d-916466d38dd6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-nfxjq_openshift-operators_5b3c1908-cc42-4af3-a73d-916466d38dd6_0(b171c2cd6cc320e8cceb16f8704babfac2a257c8885da710373349c500e0c805): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" podUID="5b3c1908-cc42-4af3-a73d-916466d38dd6" Jan 27 22:00:02 crc kubenswrapper[4803]: I0127 22:00:02.915540 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" event={"ID":"c87b2272-16b5-4b53-9a41-f53e22f176b7","Type":"ContainerStarted","Data":"94f4ae5ef3db929f8cd6fc4121e8022d37209b1916f0b01d4c3c24afcfd0cacb"} Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.928712 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" event={"ID":"c87b2272-16b5-4b53-9a41-f53e22f176b7","Type":"ContainerStarted","Data":"6de2f026ae183413d8d8fe24a098c12722cc5ac3efdc94834072f1094076ef0d"} Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.930022 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.930067 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.930200 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.958496 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.958580 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.961292 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" podStartSLOduration=7.961282706 podStartE2EDuration="7.961282706s" podCreationTimestamp="2026-01-27 21:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:00:04.959754494 +0000 UTC m=+757.375776193" watchObservedRunningTime="2026-01-27 22:00:04.961282706 +0000 UTC m=+757.377304405" Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.989878 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv"] Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.990065 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.990595 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.997696 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg"] Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.997886 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:04 crc kubenswrapper[4803]: I0127 22:00:04.998383 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.003116 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-skn2q"] Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.003350 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.003996 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.022921 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm"] Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.023072 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.023621 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.037059 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators_eed68546-4e6f-4551-95ab-7e870b098179_0(94eb9ed73a759f4905471c61804028edd40ff23d51ae729ef2e27a5dde8a853d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.037121 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators_eed68546-4e6f-4551-95ab-7e870b098179_0(94eb9ed73a759f4905471c61804028edd40ff23d51ae729ef2e27a5dde8a853d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.037143 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators_eed68546-4e6f-4551-95ab-7e870b098179_0(94eb9ed73a759f4905471c61804028edd40ff23d51ae729ef2e27a5dde8a853d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.037202 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators(eed68546-4e6f-4551-95ab-7e870b098179)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators(eed68546-4e6f-4551-95ab-7e870b098179)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_openshift-operators_eed68546-4e6f-4551-95ab-7e870b098179_0(94eb9ed73a759f4905471c61804028edd40ff23d51ae729ef2e27a5dde8a853d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" podUID="eed68546-4e6f-4551-95ab-7e870b098179" Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.056901 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-nfxjq"] Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.057007 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.057471 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.061598 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7"] Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.073989 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators_67bbe061-3ab2-43cf-9579-900c0ff65da9_0(9606be669384ceb7269a99b0c99ac340e5a88d78489c6d64a92d63ab9fac7c66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.076264 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators_67bbe061-3ab2-43cf-9579-900c0ff65da9_0(9606be669384ceb7269a99b0c99ac340e5a88d78489c6d64a92d63ab9fac7c66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.076299 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators_67bbe061-3ab2-43cf-9579-900c0ff65da9_0(9606be669384ceb7269a99b0c99ac340e5a88d78489c6d64a92d63ab9fac7c66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.076348 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators(67bbe061-3ab2-43cf-9579-900c0ff65da9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators(67bbe061-3ab2-43cf-9579-900c0ff65da9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qtnmg_openshift-operators_67bbe061-3ab2-43cf-9579-900c0ff65da9_0(9606be669384ceb7269a99b0c99ac340e5a88d78489c6d64a92d63ab9fac7c66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" podUID="67bbe061-3ab2-43cf-9579-900c0ff65da9" Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.076362 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:05 crc kubenswrapper[4803]: I0127 22:00:05.077207 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.083460 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-skn2q_openshift-operators_69126409-4642-4d42-855d-e7325b3de7c5_0(25ca58e05d60dc78a98bfaf8e935812738c148d80ba29761b3310ff94cc23f3c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.083528 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-skn2q_openshift-operators_69126409-4642-4d42-855d-e7325b3de7c5_0(25ca58e05d60dc78a98bfaf8e935812738c148d80ba29761b3310ff94cc23f3c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.083562 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-skn2q_openshift-operators_69126409-4642-4d42-855d-e7325b3de7c5_0(25ca58e05d60dc78a98bfaf8e935812738c148d80ba29761b3310ff94cc23f3c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.083610 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-skn2q_openshift-operators(69126409-4642-4d42-855d-e7325b3de7c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-skn2q_openshift-operators(69126409-4642-4d42-855d-e7325b3de7c5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-skn2q_openshift-operators_69126409-4642-4d42-855d-e7325b3de7c5_0(25ca58e05d60dc78a98bfaf8e935812738c148d80ba29761b3310ff94cc23f3c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.110910 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager_c9ce2383-1f63-4adf-964b-2e6769ac9957_0(d2fea72fa836b15cb3ff699f27e40051de4b8f158c8c42bae76622f9acb70893): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.111063 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager_c9ce2383-1f63-4adf-964b-2e6769ac9957_0(d2fea72fa836b15cb3ff699f27e40051de4b8f158c8c42bae76622f9acb70893): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.111158 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager_c9ce2383-1f63-4adf-964b-2e6769ac9957_0(d2fea72fa836b15cb3ff699f27e40051de4b8f158c8c42bae76622f9acb70893): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.111264 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager(c9ce2383-1f63-4adf-964b-2e6769ac9957)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager(c9ce2383-1f63-4adf-964b-2e6769ac9957)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29492520-bkldm_openshift-operator-lifecycle-manager_c9ce2383-1f63-4adf-964b-2e6769ac9957_0(d2fea72fa836b15cb3ff699f27e40051de4b8f158c8c42bae76622f9acb70893): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" podUID="c9ce2383-1f63-4adf-964b-2e6769ac9957" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.166058 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators_48ffb065-6bf7-4b9c-981e-f834ead82767_0(9dacb5ebf60794081ce0ff95ce0ac5fcadfdc88e4eec70e704de76a4514c908e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.166496 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators_48ffb065-6bf7-4b9c-981e-f834ead82767_0(9dacb5ebf60794081ce0ff95ce0ac5fcadfdc88e4eec70e704de76a4514c908e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.166612 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators_48ffb065-6bf7-4b9c-981e-f834ead82767_0(9dacb5ebf60794081ce0ff95ce0ac5fcadfdc88e4eec70e704de76a4514c908e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.166720 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators(48ffb065-6bf7-4b9c-981e-f834ead82767)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators(48ffb065-6bf7-4b9c-981e-f834ead82767)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_openshift-operators_48ffb065-6bf7-4b9c-981e-f834ead82767_0(9dacb5ebf60794081ce0ff95ce0ac5fcadfdc88e4eec70e704de76a4514c908e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" podUID="48ffb065-6bf7-4b9c-981e-f834ead82767" Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.179471 4803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-nfxjq_openshift-operators_5b3c1908-cc42-4af3-a73d-916466d38dd6_0(0bc205b906e383cd1d2863165bd5245cf04233be0a60eec8291032944886845d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.179554 4803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-nfxjq_openshift-operators_5b3c1908-cc42-4af3-a73d-916466d38dd6_0(0bc205b906e383cd1d2863165bd5245cf04233be0a60eec8291032944886845d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq"
Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.179578 4803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-nfxjq_openshift-operators_5b3c1908-cc42-4af3-a73d-916466d38dd6_0(0bc205b906e383cd1d2863165bd5245cf04233be0a60eec8291032944886845d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq"
Jan 27 22:00:05 crc kubenswrapper[4803]: E0127 22:00:05.179626 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-nfxjq_openshift-operators(5b3c1908-cc42-4af3-a73d-916466d38dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-nfxjq_openshift-operators(5b3c1908-cc42-4af3-a73d-916466d38dd6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-nfxjq_openshift-operators_5b3c1908-cc42-4af3-a73d-916466d38dd6_0(0bc205b906e383cd1d2863165bd5245cf04233be0a60eec8291032944886845d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" podUID="5b3c1908-cc42-4af3-a73d-916466d38dd6"
Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.317013 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv"
Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.319392 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv"
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.345366 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.345429 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.345479 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.346035 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95521df131317a8fb1bb4697014746e375ef67b38dfab0db8cdee522c9087edc"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.346084 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://95521df131317a8fb1bb4697014746e375ef67b38dfab0db8cdee522c9087edc" gracePeriod=600 Jan 27 22:00:16 crc kubenswrapper[4803]: E0127 22:00:16.542809 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeb23e3d_ee70_4f1d_85c0_005373cca336.slice/crio-conmon-95521df131317a8fb1bb4697014746e375ef67b38dfab0db8cdee522c9087edc.scope\": RecentStats: unable to find data in memory cache]" Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.561776 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv"] Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.990618 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="95521df131317a8fb1bb4697014746e375ef67b38dfab0db8cdee522c9087edc" exitCode=0 Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.990677 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"95521df131317a8fb1bb4697014746e375ef67b38dfab0db8cdee522c9087edc"} Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.991043 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"b9f834f520954d1f715c48108c608cf768b5ff78d5b3a0ccfc176c140c448267"} Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.991071 4803 scope.go:117] "RemoveContainer" 
containerID="e8efaf7b446df272e0996a17c38530d9da7be7bbc83602d505bce00b2e3d7c50" Jan 27 22:00:16 crc kubenswrapper[4803]: I0127 22:00:16.993226 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" event={"ID":"eed68546-4e6f-4551-95ab-7e870b098179","Type":"ContainerStarted","Data":"c785c8d5b589be2c0a6383ad83567f11aa4f95da69fcc8a2a5694c1eba9149f4"} Jan 27 22:00:18 crc kubenswrapper[4803]: I0127 22:00:18.308951 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:18 crc kubenswrapper[4803]: I0127 22:00:18.309373 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:18 crc kubenswrapper[4803]: I0127 22:00:18.310021 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:18 crc kubenswrapper[4803]: I0127 22:00:18.313319 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:18 crc kubenswrapper[4803]: I0127 22:00:18.314079 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:18 crc kubenswrapper[4803]: I0127 22:00:18.314539 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" Jan 27 22:00:18 crc kubenswrapper[4803]: I0127 22:00:18.818377 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg"] Jan 27 22:00:18 crc kubenswrapper[4803]: I0127 22:00:18.876834 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-skn2q"] Jan 27 22:00:18 crc kubenswrapper[4803]: I0127 22:00:18.933245 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-nfxjq"] Jan 27 22:00:19 crc kubenswrapper[4803]: I0127 22:00:19.013495 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" event={"ID":"69126409-4642-4d42-855d-e7325b3de7c5","Type":"ContainerStarted","Data":"f90cf4455944c24c8c0bb153979903b97ed6a031cf729fdf16fc6d9ba32eb390"} Jan 27 22:00:19 crc kubenswrapper[4803]: I0127 22:00:19.020813 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" event={"ID":"5b3c1908-cc42-4af3-a73d-916466d38dd6","Type":"ContainerStarted","Data":"f0f46550fc8a56d7b999ad23a32ab3e61beb56d5f55bfcb45be1cfcf5e9c15c9"} Jan 27 22:00:19 crc kubenswrapper[4803]: I0127 22:00:19.023323 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" event={"ID":"67bbe061-3ab2-43cf-9579-900c0ff65da9","Type":"ContainerStarted","Data":"252cf2b3eb873f0c106abc9b567d147681cc10e62213ffbf116af613b99e78d9"} Jan 27 22:00:19 crc kubenswrapper[4803]: I0127 22:00:19.305913 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:19 crc kubenswrapper[4803]: I0127 22:00:19.306764 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:20 crc kubenswrapper[4803]: I0127 22:00:20.306073 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:20 crc kubenswrapper[4803]: I0127 22:00:20.306877 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" Jan 27 22:00:21 crc kubenswrapper[4803]: I0127 22:00:21.463540 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm"] Jan 27 22:00:21 crc kubenswrapper[4803]: I0127 22:00:21.478104 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7"] Jan 27 22:00:21 crc kubenswrapper[4803]: W0127 22:00:21.505047 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48ffb065_6bf7_4b9c_981e_f834ead82767.slice/crio-a1f8408d9281ec9e7b9bb2caf1107c68a6b967fb73759b9236a3865629e356db WatchSource:0}: Error finding container a1f8408d9281ec9e7b9bb2caf1107c68a6b967fb73759b9236a3865629e356db: Status 404 returned error can't find the container with id a1f8408d9281ec9e7b9bb2caf1107c68a6b967fb73759b9236a3865629e356db Jan 27 22:00:22 crc kubenswrapper[4803]: I0127 22:00:22.045020 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" event={"ID":"c9ce2383-1f63-4adf-964b-2e6769ac9957","Type":"ContainerStarted","Data":"807bc661dd0b6469405200105ab60186dfcdd50ecd313b997761053da687d23b"} Jan 27 22:00:22 crc kubenswrapper[4803]: I0127 22:00:22.054560 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" event={"ID":"48ffb065-6bf7-4b9c-981e-f834ead82767","Type":"ContainerStarted","Data":"a1f8408d9281ec9e7b9bb2caf1107c68a6b967fb73759b9236a3865629e356db"} Jan 27 22:00:22 crc kubenswrapper[4803]: I0127 22:00:22.063077 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" event={"ID":"eed68546-4e6f-4551-95ab-7e870b098179","Type":"ContainerStarted","Data":"ac8085a1a98a2911c72e10725b8e2d773de7037b830e58f4646139ee572c8631"} Jan 27 22:00:22 crc kubenswrapper[4803]: I0127 22:00:22.090090 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-v75wv" podStartSLOduration=16.531806626 podStartE2EDuration="21.09007313s" podCreationTimestamp="2026-01-27 22:00:01 +0000 UTC" firstStartedPulling="2026-01-27 22:00:16.57867378 +0000 UTC m=+768.994695479" lastFinishedPulling="2026-01-27 22:00:21.136940284 +0000 UTC m=+773.552961983" observedRunningTime="2026-01-27 22:00:22.079698891 +0000 UTC m=+774.495720590" watchObservedRunningTime="2026-01-27 22:00:22.09007313 +0000 UTC m=+774.506094829" Jan 27 22:00:24 crc kubenswrapper[4803]: I0127 22:00:24.079760 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" 
event={"ID":"48ffb065-6bf7-4b9c-981e-f834ead82767","Type":"ContainerStarted","Data":"6b83eb4d2838d5431c4e2a3e688c8db916e51fff6598f477d02bffdf2b98a2ca"} Jan 27 22:00:24 crc kubenswrapper[4803]: I0127 22:00:24.083205 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" event={"ID":"5b3c1908-cc42-4af3-a73d-916466d38dd6","Type":"ContainerStarted","Data":"faf5448babcfff9690e5c9644eb3f73ff45beafa8c8aa403b12c4599f0d015f5"} Jan 27 22:00:24 crc kubenswrapper[4803]: I0127 22:00:24.083759 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:24 crc kubenswrapper[4803]: I0127 22:00:24.090594 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" event={"ID":"67bbe061-3ab2-43cf-9579-900c0ff65da9","Type":"ContainerStarted","Data":"b86740787f586b38f91a62d8ad27527434232c090ddfd22ed5bf7b1b51cd4325"} Jan 27 22:00:24 crc kubenswrapper[4803]: I0127 22:00:24.096697 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" event={"ID":"c9ce2383-1f63-4adf-964b-2e6769ac9957","Type":"ContainerStarted","Data":"af0604cdc2a15ae5769e54836ad8d72d933fafa6ccb30db1ea870d4dde063135"} Jan 27 22:00:24 crc kubenswrapper[4803]: I0127 22:00:24.162413 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" podStartSLOduration=24.162393031 podStartE2EDuration="24.162393031s" podCreationTimestamp="2026-01-27 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:00:24.157817237 +0000 UTC m=+776.573838956" watchObservedRunningTime="2026-01-27 22:00:24.162393031 +0000 UTC m=+776.578414740" Jan 27 22:00:24 crc kubenswrapper[4803]: I0127 22:00:24.163094 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7" podStartSLOduration=23.163087819 podStartE2EDuration="23.163087819s" podCreationTimestamp="2026-01-27 22:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:00:24.135614339 +0000 UTC m=+776.551636048" watchObservedRunningTime="2026-01-27 22:00:24.163087819 +0000 UTC m=+776.579109518" Jan 27 22:00:24 crc kubenswrapper[4803]: I0127 22:00:24.197237 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qtnmg" podStartSLOduration=18.609744937 podStartE2EDuration="23.197216648s" podCreationTimestamp="2026-01-27 22:00:01 +0000 UTC" firstStartedPulling="2026-01-27 22:00:18.843937244 +0000 UTC m=+771.259958943" lastFinishedPulling="2026-01-27 22:00:23.431408955 +0000 UTC m=+775.847430654" observedRunningTime="2026-01-27 22:00:24.196251252 +0000 UTC m=+776.612272951" watchObservedRunningTime="2026-01-27 22:00:24.197216648 +0000 UTC m=+776.613238347" Jan 27 22:00:24 crc kubenswrapper[4803]: I0127 22:00:24.224439 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" podStartSLOduration=17.734147996 podStartE2EDuration="22.224394139s" podCreationTimestamp="2026-01-27 22:00:02 +0000 UTC" 
firstStartedPulling="2026-01-27 22:00:18.939121196 +0000 UTC m=+771.355142895" lastFinishedPulling="2026-01-27 22:00:23.429367339 +0000 UTC m=+775.845389038" observedRunningTime="2026-01-27 22:00:24.220171676 +0000 UTC m=+776.636193385" watchObservedRunningTime="2026-01-27 22:00:24.224394139 +0000 UTC m=+776.640415838" Jan 27 22:00:25 crc kubenswrapper[4803]: I0127 22:00:25.104558 4803 generic.go:334] "Generic (PLEG): container finished" podID="c9ce2383-1f63-4adf-964b-2e6769ac9957" containerID="af0604cdc2a15ae5769e54836ad8d72d933fafa6ccb30db1ea870d4dde063135" exitCode=0 Jan 27 22:00:25 crc kubenswrapper[4803]: I0127 22:00:25.104646 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" event={"ID":"c9ce2383-1f63-4adf-964b-2e6769ac9957","Type":"ContainerDied","Data":"af0604cdc2a15ae5769e54836ad8d72d933fafa6ccb30db1ea870d4dde063135"} Jan 27 22:00:26 crc kubenswrapper[4803]: I0127 22:00:26.557345 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:26 crc kubenswrapper[4803]: I0127 22:00:26.657229 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9ce2383-1f63-4adf-964b-2e6769ac9957-secret-volume\") pod \"c9ce2383-1f63-4adf-964b-2e6769ac9957\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " Jan 27 22:00:26 crc kubenswrapper[4803]: I0127 22:00:26.657298 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-499rm\" (UniqueName: \"kubernetes.io/projected/c9ce2383-1f63-4adf-964b-2e6769ac9957-kube-api-access-499rm\") pod \"c9ce2383-1f63-4adf-964b-2e6769ac9957\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " Jan 27 22:00:26 crc kubenswrapper[4803]: I0127 22:00:26.657350 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9ce2383-1f63-4adf-964b-2e6769ac9957-config-volume\") pod \"c9ce2383-1f63-4adf-964b-2e6769ac9957\" (UID: \"c9ce2383-1f63-4adf-964b-2e6769ac9957\") " Jan 27 22:00:26 crc kubenswrapper[4803]: I0127 22:00:26.658341 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ce2383-1f63-4adf-964b-2e6769ac9957-config-volume" (OuterVolumeSpecName: "config-volume") pod "c9ce2383-1f63-4adf-964b-2e6769ac9957" (UID: "c9ce2383-1f63-4adf-964b-2e6769ac9957"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:00:26 crc kubenswrapper[4803]: I0127 22:00:26.674040 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ce2383-1f63-4adf-964b-2e6769ac9957-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c9ce2383-1f63-4adf-964b-2e6769ac9957" (UID: "c9ce2383-1f63-4adf-964b-2e6769ac9957"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:00:26 crc kubenswrapper[4803]: I0127 22:00:26.681239 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ce2383-1f63-4adf-964b-2e6769ac9957-kube-api-access-499rm" (OuterVolumeSpecName: "kube-api-access-499rm") pod "c9ce2383-1f63-4adf-964b-2e6769ac9957" (UID: "c9ce2383-1f63-4adf-964b-2e6769ac9957"). InnerVolumeSpecName "kube-api-access-499rm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:00:26 crc kubenswrapper[4803]: I0127 22:00:26.759590 4803 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9ce2383-1f63-4adf-964b-2e6769ac9957-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 22:00:26 crc kubenswrapper[4803]: I0127 22:00:26.759649 4803 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9ce2383-1f63-4adf-964b-2e6769ac9957-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 22:00:26 crc kubenswrapper[4803]: I0127 22:00:26.759663 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-499rm\" (UniqueName: \"kubernetes.io/projected/c9ce2383-1f63-4adf-964b-2e6769ac9957-kube-api-access-499rm\") on node \"crc\" DevicePath \"\"" Jan 27 22:00:27 crc kubenswrapper[4803]: I0127 22:00:27.119041 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" event={"ID":"c9ce2383-1f63-4adf-964b-2e6769ac9957","Type":"ContainerDied","Data":"807bc661dd0b6469405200105ab60186dfcdd50ecd313b997761053da687d23b"} Jan 27 22:00:27 crc kubenswrapper[4803]: I0127 22:00:27.119282 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="807bc661dd0b6469405200105ab60186dfcdd50ecd313b997761053da687d23b" Jan 27 22:00:27 crc kubenswrapper[4803]: I0127 22:00:27.119151 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm" Jan 27 22:00:27 crc kubenswrapper[4803]: I0127 22:00:27.795237 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24rf7" Jan 27 22:00:28 crc kubenswrapper[4803]: I0127 22:00:28.125878 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" event={"ID":"69126409-4642-4d42-855d-e7325b3de7c5","Type":"ContainerStarted","Data":"cd22b2e4ca8aa1dbc483fc088e5ece9d993383c7668255cf22bf0281a9f959a9"} Jan 27 22:00:28 crc kubenswrapper[4803]: I0127 22:00:28.126300 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:28 crc kubenswrapper[4803]: I0127 22:00:28.128421 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 22:00:28 crc kubenswrapper[4803]: I0127 22:00:28.143011 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podStartSLOduration=18.3903462 podStartE2EDuration="26.142990847s" podCreationTimestamp="2026-01-27 22:00:02 +0000 UTC" firstStartedPulling="2026-01-27 22:00:18.884432964 +0000 UTC m=+771.300454653" lastFinishedPulling="2026-01-27 22:00:26.637077601 +0000 UTC m=+779.053099300" observedRunningTime="2026-01-27 22:00:28.140063228 +0000 UTC m=+780.556084937" watchObservedRunningTime="2026-01-27 22:00:28.142990847 +0000 UTC m=+780.559012556" Jan 27 22:00:32 crc kubenswrapper[4803]: I0127 22:00:32.734296 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.629083 4803 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-858654f9db-4st4q"] Jan 27 22:00:34 crc kubenswrapper[4803]: E0127 22:00:34.629437 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ce2383-1f63-4adf-964b-2e6769ac9957" containerName="collect-profiles" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.629453 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ce2383-1f63-4adf-964b-2e6769ac9957" containerName="collect-profiles" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.629591 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ce2383-1f63-4adf-964b-2e6769ac9957" containerName="collect-profiles" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.630059 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-4st4q" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.631930 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.632115 4803 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7bfkf" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.632309 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.633524 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mvwfx"] Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.634260 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvwfx" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.638952 4803 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-mh8s2" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.644564 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-99277"] Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.645568 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.647319 4803 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-fqmv4" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.648821 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mvwfx"] Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.659640 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-4st4q"] Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.688781 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-99277"] Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.691314 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpchw\" (UniqueName: \"kubernetes.io/projected/601f83b4-7a3d-49fe-9674-58798267d78c-kube-api-access-xpchw\") pod \"cert-manager-cainjector-cf98fcc89-mvwfx\" (UID: \"601f83b4-7a3d-49fe-9674-58798267d78c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvwfx" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.792138 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpchw\" (UniqueName: \"kubernetes.io/projected/601f83b4-7a3d-49fe-9674-58798267d78c-kube-api-access-xpchw\") pod \"cert-manager-cainjector-cf98fcc89-mvwfx\" (UID: \"601f83b4-7a3d-49fe-9674-58798267d78c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvwfx" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.792186 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shnft\" (UniqueName: \"kubernetes.io/projected/021b5278-1b81-43b3-ae44-ec231fb77687-kube-api-access-shnft\") pod \"cert-manager-webhook-687f57d79b-99277\" (UID: \"021b5278-1b81-43b3-ae44-ec231fb77687\") " pod="cert-manager/cert-manager-webhook-687f57d79b-99277" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.792238 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkg82\" (UniqueName: \"kubernetes.io/projected/3354336a-cebc-4270-8c96-379cfa5682b8-kube-api-access-jkg82\") pod \"cert-manager-858654f9db-4st4q\" (UID: \"3354336a-cebc-4270-8c96-379cfa5682b8\") " pod="cert-manager/cert-manager-858654f9db-4st4q" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.814486 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpchw\" (UniqueName: \"kubernetes.io/projected/601f83b4-7a3d-49fe-9674-58798267d78c-kube-api-access-xpchw\") pod \"cert-manager-cainjector-cf98fcc89-mvwfx\" (UID: \"601f83b4-7a3d-49fe-9674-58798267d78c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvwfx" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.893414 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkg82\" (UniqueName: \"kubernetes.io/projected/3354336a-cebc-4270-8c96-379cfa5682b8-kube-api-access-jkg82\") pod \"cert-manager-858654f9db-4st4q\" (UID: \"3354336a-cebc-4270-8c96-379cfa5682b8\") " pod="cert-manager/cert-manager-858654f9db-4st4q" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.893510 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shnft\" (UniqueName: 
\"kubernetes.io/projected/021b5278-1b81-43b3-ae44-ec231fb77687-kube-api-access-shnft\") pod \"cert-manager-webhook-687f57d79b-99277\" (UID: \"021b5278-1b81-43b3-ae44-ec231fb77687\") " pod="cert-manager/cert-manager-webhook-687f57d79b-99277" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.913842 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shnft\" (UniqueName: \"kubernetes.io/projected/021b5278-1b81-43b3-ae44-ec231fb77687-kube-api-access-shnft\") pod \"cert-manager-webhook-687f57d79b-99277\" (UID: \"021b5278-1b81-43b3-ae44-ec231fb77687\") " pod="cert-manager/cert-manager-webhook-687f57d79b-99277" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.914566 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkg82\" (UniqueName: \"kubernetes.io/projected/3354336a-cebc-4270-8c96-379cfa5682b8-kube-api-access-jkg82\") pod \"cert-manager-858654f9db-4st4q\" (UID: \"3354336a-cebc-4270-8c96-379cfa5682b8\") " pod="cert-manager/cert-manager-858654f9db-4st4q" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.945388 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-4st4q" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.968075 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvwfx" Jan 27 22:00:34 crc kubenswrapper[4803]: I0127 22:00:34.974398 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" Jan 27 22:00:35 crc kubenswrapper[4803]: I0127 22:00:35.345920 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-99277"] Jan 27 22:00:35 crc kubenswrapper[4803]: I0127 22:00:35.498496 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-4st4q"] Jan 27 22:00:35 crc kubenswrapper[4803]: W0127 22:00:35.500503 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3354336a_cebc_4270_8c96_379cfa5682b8.slice/crio-09b0fc5a6d33cce24ccf0677fb2ebf8a3b5b5faf5ebc38245ece654265b0ac24 WatchSource:0}: Error finding container 09b0fc5a6d33cce24ccf0677fb2ebf8a3b5b5faf5ebc38245ece654265b0ac24: Status 404 returned error can't find the container with id 09b0fc5a6d33cce24ccf0677fb2ebf8a3b5b5faf5ebc38245ece654265b0ac24 Jan 27 22:00:35 crc kubenswrapper[4803]: I0127 22:00:35.513544 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mvwfx"] Jan 27 22:00:36 crc kubenswrapper[4803]: I0127 22:00:36.170685 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" event={"ID":"021b5278-1b81-43b3-ae44-ec231fb77687","Type":"ContainerStarted","Data":"ac20395ca6d07c0aab2e812c17d30bac89ef55f9e3c588de5a030bba9fc602c3"} Jan 27 22:00:36 crc kubenswrapper[4803]: I0127 22:00:36.171791 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-4st4q" event={"ID":"3354336a-cebc-4270-8c96-379cfa5682b8","Type":"ContainerStarted","Data":"09b0fc5a6d33cce24ccf0677fb2ebf8a3b5b5faf5ebc38245ece654265b0ac24"} Jan 27 22:00:36 crc kubenswrapper[4803]: I0127 22:00:36.173203 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvwfx" 
event={"ID":"601f83b4-7a3d-49fe-9674-58798267d78c","Type":"ContainerStarted","Data":"5b974bafdf23b9b183c2c9a4693d39204d4160211d9e8c0d17bb3e8a7337972a"} Jan 27 22:00:40 crc kubenswrapper[4803]: I0127 22:00:40.222783 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvwfx" event={"ID":"601f83b4-7a3d-49fe-9674-58798267d78c","Type":"ContainerStarted","Data":"96e29cdfbcb1bce1760d31567ab01e7856a0587972c5084a4253ae115b155082"} Jan 27 22:00:40 crc kubenswrapper[4803]: I0127 22:00:40.224629 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-4st4q" event={"ID":"3354336a-cebc-4270-8c96-379cfa5682b8","Type":"ContainerStarted","Data":"45a0b80511d33ebf251f84098ddd9ff50d5c0f355cebfde36d31e32f466d7eac"} Jan 27 22:00:40 crc kubenswrapper[4803]: I0127 22:00:40.225974 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" event={"ID":"021b5278-1b81-43b3-ae44-ec231fb77687","Type":"ContainerStarted","Data":"66c7ae6af45ed4e9c0dd77878a93218285fc03436973b873d93fdca464388218"} Jan 27 22:00:40 crc kubenswrapper[4803]: I0127 22:00:40.226117 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" Jan 27 22:00:40 crc kubenswrapper[4803]: I0127 22:00:40.244475 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvwfx" podStartSLOduration=2.005400249 podStartE2EDuration="6.244458482s" podCreationTimestamp="2026-01-27 22:00:34 +0000 UTC" firstStartedPulling="2026-01-27 22:00:35.518273557 +0000 UTC m=+787.934295256" lastFinishedPulling="2026-01-27 22:00:39.7573318 +0000 UTC m=+792.173353489" observedRunningTime="2026-01-27 22:00:40.241968175 +0000 UTC m=+792.657989874" watchObservedRunningTime="2026-01-27 22:00:40.244458482 +0000 UTC m=+792.660480181" Jan 27 22:00:40 crc kubenswrapper[4803]: I0127 22:00:40.282699 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-4st4q" podStartSLOduration=2.021639786 podStartE2EDuration="6.28267816s" podCreationTimestamp="2026-01-27 22:00:34 +0000 UTC" firstStartedPulling="2026-01-27 22:00:35.502522943 +0000 UTC m=+787.918544642" lastFinishedPulling="2026-01-27 22:00:39.763561307 +0000 UTC m=+792.179583016" observedRunningTime="2026-01-27 22:00:40.279133665 +0000 UTC m=+792.695155364" watchObservedRunningTime="2026-01-27 22:00:40.28267816 +0000 UTC m=+792.698699859" Jan 27 22:00:40 crc kubenswrapper[4803]: I0127 22:00:40.308456 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" podStartSLOduration=1.9143045669999998 podStartE2EDuration="6.308439294s" podCreationTimestamp="2026-01-27 22:00:34 +0000 UTC" firstStartedPulling="2026-01-27 22:00:35.35417078 +0000 UTC m=+787.770192479" lastFinishedPulling="2026-01-27 22:00:39.748305497 +0000 UTC m=+792.164327206" observedRunningTime="2026-01-27 22:00:40.305189136 +0000 UTC m=+792.721210835" watchObservedRunningTime="2026-01-27 22:00:40.308439294 +0000 UTC m=+792.724460993" Jan 27 22:00:44 crc kubenswrapper[4803]: I0127 22:00:44.994917 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.109811 4803 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5"] Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.111493 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.115766 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.121261 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5"] Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.278311 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.278380 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.278434 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkxkz\" (UniqueName: \"kubernetes.io/projected/3cb26d95-4b42-4f55-921c-390f8bb5853c-kube-api-access-jkxkz\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.310114 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv"] Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.311683 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.318061 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv"] Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.379628 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.379695 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.379911 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkxkz\" (UniqueName: \"kubernetes.io/projected/3cb26d95-4b42-4f55-921c-390f8bb5853c-kube-api-access-jkxkz\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.380114 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.380250 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.398999 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkxkz\" (UniqueName: \"kubernetes.io/projected/3cb26d95-4b42-4f55-921c-390f8bb5853c-kube-api-access-jkxkz\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.481448 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvk7b\" (UniqueName: \"kubernetes.io/projected/ef42d6f6-0acd-4bb0-aec2-a67189015527-kube-api-access-bvk7b\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.481809 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.481872 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.482094 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.583586 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvk7b\" (UniqueName: \"kubernetes.io/projected/ef42d6f6-0acd-4bb0-aec2-a67189015527-kube-api-access-bvk7b\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.583636 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.583671 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.584549 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.585555 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 
22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.620681 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvk7b\" (UniqueName: \"kubernetes.io/projected/ef42d6f6-0acd-4bb0-aec2-a67189015527-kube-api-access-bvk7b\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.635174 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:11 crc kubenswrapper[4803]: I0127 22:01:11.781246 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5"] Jan 27 22:01:12 crc kubenswrapper[4803]: I0127 22:01:12.076950 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv"] Jan 27 22:01:12 crc kubenswrapper[4803]: W0127 22:01:12.085332 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef42d6f6_0acd_4bb0_aec2_a67189015527.slice/crio-a367232fd5e5dbd29aa9eb8c8e8626dbc163bd8b5affc9430d9ef35f115a6733 WatchSource:0}: Error finding container a367232fd5e5dbd29aa9eb8c8e8626dbc163bd8b5affc9430d9ef35f115a6733: Status 404 returned error can't find the container with id a367232fd5e5dbd29aa9eb8c8e8626dbc163bd8b5affc9430d9ef35f115a6733 Jan 27 22:01:12 crc kubenswrapper[4803]: I0127 22:01:12.432254 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" event={"ID":"ef42d6f6-0acd-4bb0-aec2-a67189015527","Type":"ContainerStarted","Data":"a367232fd5e5dbd29aa9eb8c8e8626dbc163bd8b5affc9430d9ef35f115a6733"} Jan 27 22:01:12 crc kubenswrapper[4803]: I0127 22:01:12.433787 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" event={"ID":"3cb26d95-4b42-4f55-921c-390f8bb5853c","Type":"ContainerStarted","Data":"274370b66d93248dade88a7d1c93f7eb430d246fe090b9c544c3cccccb32cdef"} Jan 27 22:01:12 crc kubenswrapper[4803]: I0127 22:01:12.433808 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" event={"ID":"3cb26d95-4b42-4f55-921c-390f8bb5853c","Type":"ContainerStarted","Data":"ade8544812ce4f79ad2d03c344bbef741e52f1dd8756b9529ec67ded0a370042"} Jan 27 22:01:13 crc kubenswrapper[4803]: I0127 22:01:13.441066 4803 generic.go:334] "Generic (PLEG): container finished" podID="ef42d6f6-0acd-4bb0-aec2-a67189015527" containerID="d367976e0794f011a14257252a8c7a90562c84c9d90c167d5251166b9fbe7c33" exitCode=0 Jan 27 22:01:13 crc kubenswrapper[4803]: I0127 22:01:13.441140 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" event={"ID":"ef42d6f6-0acd-4bb0-aec2-a67189015527","Type":"ContainerDied","Data":"d367976e0794f011a14257252a8c7a90562c84c9d90c167d5251166b9fbe7c33"} Jan 27 22:01:13 crc kubenswrapper[4803]: I0127 22:01:13.442751 4803 generic.go:334] "Generic (PLEG): container finished" podID="3cb26d95-4b42-4f55-921c-390f8bb5853c" 
containerID="274370b66d93248dade88a7d1c93f7eb430d246fe090b9c544c3cccccb32cdef" exitCode=0 Jan 27 22:01:13 crc kubenswrapper[4803]: I0127 22:01:13.442799 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" event={"ID":"3cb26d95-4b42-4f55-921c-390f8bb5853c","Type":"ContainerDied","Data":"274370b66d93248dade88a7d1c93f7eb430d246fe090b9c544c3cccccb32cdef"} Jan 27 22:01:14 crc kubenswrapper[4803]: I0127 22:01:14.875705 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4n94b"] Jan 27 22:01:14 crc kubenswrapper[4803]: I0127 22:01:14.877728 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:14 crc kubenswrapper[4803]: I0127 22:01:14.951286 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4n94b"] Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.034168 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clgqz\" (UniqueName: \"kubernetes.io/projected/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-kube-api-access-clgqz\") pod \"redhat-operators-4n94b\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.034473 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-catalog-content\") pod \"redhat-operators-4n94b\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.034514 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-utilities\") pod \"redhat-operators-4n94b\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.135761 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-catalog-content\") pod \"redhat-operators-4n94b\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.135814 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-utilities\") pod \"redhat-operators-4n94b\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.135896 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clgqz\" (UniqueName: \"kubernetes.io/projected/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-kube-api-access-clgqz\") pod \"redhat-operators-4n94b\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.136277 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-utilities\") pod \"redhat-operators-4n94b\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.136322 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-catalog-content\") pod \"redhat-operators-4n94b\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.153996 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clgqz\" (UniqueName: \"kubernetes.io/projected/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-kube-api-access-clgqz\") pod \"redhat-operators-4n94b\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.219549 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.429607 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4n94b"] Jan 27 22:01:15 crc kubenswrapper[4803]: W0127 22:01:15.440177 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7a33dc3_ce4c_445b_8d71_9d9c3860302a.slice/crio-1bdc8d57a4cd007cb0107ffc5f0fd3c06eaecdf51cdbb1ac15c44f7add2f03bc WatchSource:0}: Error finding container 1bdc8d57a4cd007cb0107ffc5f0fd3c06eaecdf51cdbb1ac15c44f7add2f03bc: Status 404 returned error can't find the container with id 1bdc8d57a4cd007cb0107ffc5f0fd3c06eaecdf51cdbb1ac15c44f7add2f03bc Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.462555 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4n94b" event={"ID":"f7a33dc3-ce4c-445b-8d71-9d9c3860302a","Type":"ContainerStarted","Data":"1bdc8d57a4cd007cb0107ffc5f0fd3c06eaecdf51cdbb1ac15c44f7add2f03bc"} Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.464670 4803 generic.go:334] "Generic (PLEG): container finished" podID="ef42d6f6-0acd-4bb0-aec2-a67189015527" containerID="e60f80f8979c6122de721e836b8dfd152bccb526136fe946f7eacdb6477f2332" exitCode=0 Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.464712 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" event={"ID":"ef42d6f6-0acd-4bb0-aec2-a67189015527","Type":"ContainerDied","Data":"e60f80f8979c6122de721e836b8dfd152bccb526136fe946f7eacdb6477f2332"} Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.468902 4803 generic.go:334] "Generic (PLEG): container finished" podID="3cb26d95-4b42-4f55-921c-390f8bb5853c" containerID="2a609d87c23f4827f640869fbbc26a22f0985f70fd238ef10439be52ed2dd1f5" exitCode=0 Jan 27 22:01:15 crc kubenswrapper[4803]: I0127 22:01:15.468940 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" event={"ID":"3cb26d95-4b42-4f55-921c-390f8bb5853c","Type":"ContainerDied","Data":"2a609d87c23f4827f640869fbbc26a22f0985f70fd238ef10439be52ed2dd1f5"} Jan 27 22:01:16 crc kubenswrapper[4803]: I0127 22:01:16.476236 4803 generic.go:334] "Generic (PLEG): container 
finished" podID="ef42d6f6-0acd-4bb0-aec2-a67189015527" containerID="6f1ee4022b39b3fd82b992984ceb113843193a6841b22a29d18800d735a12e5f" exitCode=0 Jan 27 22:01:16 crc kubenswrapper[4803]: I0127 22:01:16.476328 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" event={"ID":"ef42d6f6-0acd-4bb0-aec2-a67189015527","Type":"ContainerDied","Data":"6f1ee4022b39b3fd82b992984ceb113843193a6841b22a29d18800d735a12e5f"} Jan 27 22:01:16 crc kubenswrapper[4803]: I0127 22:01:16.479959 4803 generic.go:334] "Generic (PLEG): container finished" podID="3cb26d95-4b42-4f55-921c-390f8bb5853c" containerID="5390a41719f698eafde58291496dddd4ca0721566109747f7f8cb6f9005fe164" exitCode=0 Jan 27 22:01:16 crc kubenswrapper[4803]: I0127 22:01:16.480050 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" event={"ID":"3cb26d95-4b42-4f55-921c-390f8bb5853c","Type":"ContainerDied","Data":"5390a41719f698eafde58291496dddd4ca0721566109747f7f8cb6f9005fe164"} Jan 27 22:01:16 crc kubenswrapper[4803]: I0127 22:01:16.481401 4803 generic.go:334] "Generic (PLEG): container finished" podID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerID="7e201ac2cf7f589b7a93be72a6f20c95c1629204a74c17bf81cd6c3c41bc3917" exitCode=0 Jan 27 22:01:16 crc kubenswrapper[4803]: I0127 22:01:16.481471 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4n94b" event={"ID":"f7a33dc3-ce4c-445b-8d71-9d9c3860302a","Type":"ContainerDied","Data":"7e201ac2cf7f589b7a93be72a6f20c95c1629204a74c17bf81cd6c3c41bc3917"} Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.764829 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.771356 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvk7b\" (UniqueName: \"kubernetes.io/projected/ef42d6f6-0acd-4bb0-aec2-a67189015527-kube-api-access-bvk7b\") pod \"ef42d6f6-0acd-4bb0-aec2-a67189015527\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.771533 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-util\") pod \"ef42d6f6-0acd-4bb0-aec2-a67189015527\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.771614 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-bundle\") pod \"ef42d6f6-0acd-4bb0-aec2-a67189015527\" (UID: \"ef42d6f6-0acd-4bb0-aec2-a67189015527\") " Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.772728 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-bundle" (OuterVolumeSpecName: "bundle") pod "ef42d6f6-0acd-4bb0-aec2-a67189015527" (UID: "ef42d6f6-0acd-4bb0-aec2-a67189015527"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.776220 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.783361 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef42d6f6-0acd-4bb0-aec2-a67189015527-kube-api-access-bvk7b" (OuterVolumeSpecName: "kube-api-access-bvk7b") pod "ef42d6f6-0acd-4bb0-aec2-a67189015527" (UID: "ef42d6f6-0acd-4bb0-aec2-a67189015527"). InnerVolumeSpecName "kube-api-access-bvk7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.790481 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-util" (OuterVolumeSpecName: "util") pod "ef42d6f6-0acd-4bb0-aec2-a67189015527" (UID: "ef42d6f6-0acd-4bb0-aec2-a67189015527"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.872569 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-bundle\") pod \"3cb26d95-4b42-4f55-921c-390f8bb5853c\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.873037 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkxkz\" (UniqueName: \"kubernetes.io/projected/3cb26d95-4b42-4f55-921c-390f8bb5853c-kube-api-access-jkxkz\") pod \"3cb26d95-4b42-4f55-921c-390f8bb5853c\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.873094 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-util\") pod \"3cb26d95-4b42-4f55-921c-390f8bb5853c\" (UID: \"3cb26d95-4b42-4f55-921c-390f8bb5853c\") " Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.873305 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvk7b\" (UniqueName: \"kubernetes.io/projected/ef42d6f6-0acd-4bb0-aec2-a67189015527-kube-api-access-bvk7b\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.873327 4803 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-util\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.873340 4803 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef42d6f6-0acd-4bb0-aec2-a67189015527-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.873815 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-bundle" (OuterVolumeSpecName: "bundle") pod "3cb26d95-4b42-4f55-921c-390f8bb5853c" (UID: "3cb26d95-4b42-4f55-921c-390f8bb5853c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.876827 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb26d95-4b42-4f55-921c-390f8bb5853c-kube-api-access-jkxkz" (OuterVolumeSpecName: "kube-api-access-jkxkz") pod "3cb26d95-4b42-4f55-921c-390f8bb5853c" (UID: "3cb26d95-4b42-4f55-921c-390f8bb5853c"). InnerVolumeSpecName "kube-api-access-jkxkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.884140 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-util" (OuterVolumeSpecName: "util") pod "3cb26d95-4b42-4f55-921c-390f8bb5853c" (UID: "3cb26d95-4b42-4f55-921c-390f8bb5853c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.975078 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkxkz\" (UniqueName: \"kubernetes.io/projected/3cb26d95-4b42-4f55-921c-390f8bb5853c-kube-api-access-jkxkz\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.975118 4803 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-util\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:17 crc kubenswrapper[4803]: I0127 22:01:17.975128 4803 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cb26d95-4b42-4f55-921c-390f8bb5853c-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:18 crc kubenswrapper[4803]: I0127 22:01:18.495128 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" event={"ID":"3cb26d95-4b42-4f55-921c-390f8bb5853c","Type":"ContainerDied","Data":"ade8544812ce4f79ad2d03c344bbef741e52f1dd8756b9529ec67ded0a370042"} Jan 27 22:01:18 crc kubenswrapper[4803]: I0127 22:01:18.495180 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ade8544812ce4f79ad2d03c344bbef741e52f1dd8756b9529ec67ded0a370042" Jan 27 22:01:18 crc kubenswrapper[4803]: I0127 22:01:18.495297 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5" Jan 27 22:01:18 crc kubenswrapper[4803]: I0127 22:01:18.498251 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4n94b" event={"ID":"f7a33dc3-ce4c-445b-8d71-9d9c3860302a","Type":"ContainerStarted","Data":"5b17dbbdf1db0199b595dbce831ba882628c725c7935129482bfb99921126699"} Jan 27 22:01:18 crc kubenswrapper[4803]: I0127 22:01:18.502124 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" event={"ID":"ef42d6f6-0acd-4bb0-aec2-a67189015527","Type":"ContainerDied","Data":"a367232fd5e5dbd29aa9eb8c8e8626dbc163bd8b5affc9430d9ef35f115a6733"} Jan 27 22:01:18 crc kubenswrapper[4803]: I0127 22:01:18.502159 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a367232fd5e5dbd29aa9eb8c8e8626dbc163bd8b5affc9430d9ef35f115a6733" Jan 27 22:01:18 crc kubenswrapper[4803]: I0127 22:01:18.502224 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv" Jan 27 22:01:19 crc kubenswrapper[4803]: I0127 22:01:19.508841 4803 generic.go:334] "Generic (PLEG): container finished" podID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerID="5b17dbbdf1db0199b595dbce831ba882628c725c7935129482bfb99921126699" exitCode=0 Jan 27 22:01:19 crc kubenswrapper[4803]: I0127 22:01:19.508940 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4n94b" event={"ID":"f7a33dc3-ce4c-445b-8d71-9d9c3860302a","Type":"ContainerDied","Data":"5b17dbbdf1db0199b595dbce831ba882628c725c7935129482bfb99921126699"} Jan 27 22:01:20 crc kubenswrapper[4803]: I0127 22:01:20.517893 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4n94b" event={"ID":"f7a33dc3-ce4c-445b-8d71-9d9c3860302a","Type":"ContainerStarted","Data":"7e859e6e1cec81e414106aa28b19de23b9f4d17877998ac58799c501912a99a5"} Jan 27 22:01:20 crc kubenswrapper[4803]: I0127 22:01:20.536776 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4n94b" podStartSLOduration=2.953291752 podStartE2EDuration="6.536757199s" podCreationTimestamp="2026-01-27 22:01:14 +0000 UTC" firstStartedPulling="2026-01-27 22:01:16.483161828 +0000 UTC m=+828.899183527" lastFinishedPulling="2026-01-27 22:01:20.066627275 +0000 UTC m=+832.482648974" observedRunningTime="2026-01-27 22:01:20.534609501 +0000 UTC m=+832.950631200" watchObservedRunningTime="2026-01-27 22:01:20.536757199 +0000 UTC m=+832.952778898" Jan 27 22:01:25 crc kubenswrapper[4803]: I0127 22:01:25.220076 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:25 crc kubenswrapper[4803]: I0127 22:01:25.220675 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:26 crc kubenswrapper[4803]: I0127 22:01:26.264348 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4n94b" podUID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerName="registry-server" probeResult="failure" output=< Jan 27 22:01:26 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:01:26 crc kubenswrapper[4803]: > Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.763512 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5"] Jan 27 22:01:28 crc kubenswrapper[4803]: E0127 22:01:28.764132 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb26d95-4b42-4f55-921c-390f8bb5853c" containerName="util" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.764146 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb26d95-4b42-4f55-921c-390f8bb5853c" containerName="util" Jan 27 22:01:28 crc kubenswrapper[4803]: E0127 22:01:28.764158 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb26d95-4b42-4f55-921c-390f8bb5853c" containerName="pull" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.764165 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb26d95-4b42-4f55-921c-390f8bb5853c" containerName="pull" Jan 27 22:01:28 crc kubenswrapper[4803]: E0127 22:01:28.764178 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef42d6f6-0acd-4bb0-aec2-a67189015527" 
containerName="util" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.764184 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef42d6f6-0acd-4bb0-aec2-a67189015527" containerName="util" Jan 27 22:01:28 crc kubenswrapper[4803]: E0127 22:01:28.764193 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef42d6f6-0acd-4bb0-aec2-a67189015527" containerName="pull" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.764200 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef42d6f6-0acd-4bb0-aec2-a67189015527" containerName="pull" Jan 27 22:01:28 crc kubenswrapper[4803]: E0127 22:01:28.764213 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb26d95-4b42-4f55-921c-390f8bb5853c" containerName="extract" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.764220 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb26d95-4b42-4f55-921c-390f8bb5853c" containerName="extract" Jan 27 22:01:28 crc kubenswrapper[4803]: E0127 22:01:28.764233 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef42d6f6-0acd-4bb0-aec2-a67189015527" containerName="extract" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.764240 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef42d6f6-0acd-4bb0-aec2-a67189015527" containerName="extract" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.764360 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef42d6f6-0acd-4bb0-aec2-a67189015527" containerName="extract" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.764374 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb26d95-4b42-4f55-921c-390f8bb5853c" containerName="extract" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.765241 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.769552 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.769645 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-ll7c7" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.769765 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.769955 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.769987 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.770047 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.775688 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5"] Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.826223 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-webhook-cert\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.826322 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.826345 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-manager-config\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.826361 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tn4h\" (UniqueName: \"kubernetes.io/projected/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-kube-api-access-5tn4h\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.826386 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-apiservice-cert\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.927379 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.928532 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-manager-config\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.928561 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tn4h\" (UniqueName: \"kubernetes.io/projected/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-kube-api-access-5tn4h\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.928594 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-apiservice-cert\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.928663 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-webhook-cert\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.929435 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-manager-config\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.935703 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-webhook-cert\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.936872 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-apiservice-cert\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.939635 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:28 crc kubenswrapper[4803]: I0127 22:01:28.951610 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tn4h\" (UniqueName: \"kubernetes.io/projected/51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d-kube-api-access-5tn4h\") pod \"loki-operator-controller-manager-b65d5f66c-f2bd5\" (UID: \"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:29 crc kubenswrapper[4803]: I0127 22:01:29.081089 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:29 crc kubenswrapper[4803]: I0127 22:01:29.497299 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5"] Jan 27 22:01:29 crc kubenswrapper[4803]: W0127 22:01:29.505549 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51ba4ac9_8ab7_4c28_83fe_6a3fbe40025d.slice/crio-b905b7495ee6aff4e76f72c2570e32c22a703bc5d7f1b876b5749f162a83a67d WatchSource:0}: Error finding container b905b7495ee6aff4e76f72c2570e32c22a703bc5d7f1b876b5749f162a83a67d: Status 404 returned error can't find the container with id b905b7495ee6aff4e76f72c2570e32c22a703bc5d7f1b876b5749f162a83a67d Jan 27 22:01:29 crc kubenswrapper[4803]: I0127 22:01:29.591222 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" event={"ID":"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d","Type":"ContainerStarted","Data":"b905b7495ee6aff4e76f72c2570e32c22a703bc5d7f1b876b5749f162a83a67d"} Jan 27 22:01:31 crc kubenswrapper[4803]: I0127 22:01:31.600340 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-n4gqd"] Jan 27 22:01:31 crc kubenswrapper[4803]: I0127 22:01:31.601429 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-n4gqd" Jan 27 22:01:31 crc kubenswrapper[4803]: I0127 22:01:31.603600 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 27 22:01:31 crc kubenswrapper[4803]: I0127 22:01:31.604262 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 27 22:01:31 crc kubenswrapper[4803]: I0127 22:01:31.604347 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-5r8k2" Jan 27 22:01:31 crc kubenswrapper[4803]: I0127 22:01:31.620310 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-n4gqd"] Jan 27 22:01:31 crc kubenswrapper[4803]: I0127 22:01:31.674001 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gxkl\" (UniqueName: \"kubernetes.io/projected/98bff570-ae6c-423d-8a0b-0d2aed9e0853-kube-api-access-5gxkl\") pod \"cluster-logging-operator-79cf69ddc8-n4gqd\" (UID: \"98bff570-ae6c-423d-8a0b-0d2aed9e0853\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-n4gqd" Jan 27 22:01:31 crc kubenswrapper[4803]: I0127 22:01:31.777147 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gxkl\" (UniqueName: \"kubernetes.io/projected/98bff570-ae6c-423d-8a0b-0d2aed9e0853-kube-api-access-5gxkl\") pod \"cluster-logging-operator-79cf69ddc8-n4gqd\" (UID: \"98bff570-ae6c-423d-8a0b-0d2aed9e0853\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-n4gqd" Jan 27 22:01:31 crc kubenswrapper[4803]: I0127 22:01:31.796269 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gxkl\" (UniqueName: \"kubernetes.io/projected/98bff570-ae6c-423d-8a0b-0d2aed9e0853-kube-api-access-5gxkl\") pod \"cluster-logging-operator-79cf69ddc8-n4gqd\" (UID: \"98bff570-ae6c-423d-8a0b-0d2aed9e0853\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-n4gqd" Jan 27 22:01:31 crc kubenswrapper[4803]: I0127 22:01:31.918408 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-n4gqd" Jan 27 22:01:32 crc kubenswrapper[4803]: I0127 22:01:32.365791 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-n4gqd"] Jan 27 22:01:34 crc kubenswrapper[4803]: W0127 22:01:34.641504 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98bff570_ae6c_423d_8a0b_0d2aed9e0853.slice/crio-68f928ba2a1bae1fcf9515d5583eefdf624f8a78d8f6b8b432af0c1f05920f2a WatchSource:0}: Error finding container 68f928ba2a1bae1fcf9515d5583eefdf624f8a78d8f6b8b432af0c1f05920f2a: Status 404 returned error can't find the container with id 68f928ba2a1bae1fcf9515d5583eefdf624f8a78d8f6b8b432af0c1f05920f2a Jan 27 22:01:35 crc kubenswrapper[4803]: I0127 22:01:35.280641 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:35 crc kubenswrapper[4803]: I0127 22:01:35.321761 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:35 crc kubenswrapper[4803]: I0127 22:01:35.629203 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-n4gqd" event={"ID":"98bff570-ae6c-423d-8a0b-0d2aed9e0853","Type":"ContainerStarted","Data":"68f928ba2a1bae1fcf9515d5583eefdf624f8a78d8f6b8b432af0c1f05920f2a"} Jan 27 22:01:35 crc kubenswrapper[4803]: I0127 22:01:35.630722 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" event={"ID":"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d","Type":"ContainerStarted","Data":"eb68120625d152e4afd7445956dc72c4134e1d90bc71f6457a33ed62b7c72527"} Jan 27 22:01:37 crc kubenswrapper[4803]: I0127 22:01:37.863763 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4n94b"] Jan 27 22:01:37 crc kubenswrapper[4803]: I0127 22:01:37.864252 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4n94b" podUID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerName="registry-server" containerID="cri-o://7e859e6e1cec81e414106aa28b19de23b9f4d17877998ac58799c501912a99a5" gracePeriod=2 Jan 27 22:01:38 crc kubenswrapper[4803]: I0127 22:01:38.661880 4803 generic.go:334] "Generic (PLEG): container finished" podID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerID="7e859e6e1cec81e414106aa28b19de23b9f4d17877998ac58799c501912a99a5" exitCode=0 Jan 27 22:01:38 crc kubenswrapper[4803]: I0127 22:01:38.661929 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4n94b" event={"ID":"f7a33dc3-ce4c-445b-8d71-9d9c3860302a","Type":"ContainerDied","Data":"7e859e6e1cec81e414106aa28b19de23b9f4d17877998ac58799c501912a99a5"} Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.638625 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.693039 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4n94b" event={"ID":"f7a33dc3-ce4c-445b-8d71-9d9c3860302a","Type":"ContainerDied","Data":"1bdc8d57a4cd007cb0107ffc5f0fd3c06eaecdf51cdbb1ac15c44f7add2f03bc"} Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.693095 4803 scope.go:117] "RemoveContainer" containerID="7e859e6e1cec81e414106aa28b19de23b9f4d17877998ac58799c501912a99a5" Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.693220 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4n94b" Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.755086 4803 scope.go:117] "RemoveContainer" containerID="5b17dbbdf1db0199b595dbce831ba882628c725c7935129482bfb99921126699" Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.765105 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clgqz\" (UniqueName: \"kubernetes.io/projected/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-kube-api-access-clgqz\") pod \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.765158 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-utilities\") pod \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.765186 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-catalog-content\") pod \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\" (UID: \"f7a33dc3-ce4c-445b-8d71-9d9c3860302a\") " Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.767082 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-utilities" (OuterVolumeSpecName: "utilities") pod "f7a33dc3-ce4c-445b-8d71-9d9c3860302a" (UID: "f7a33dc3-ce4c-445b-8d71-9d9c3860302a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.779913 4803 scope.go:117] "RemoveContainer" containerID="7e201ac2cf7f589b7a93be72a6f20c95c1629204a74c17bf81cd6c3c41bc3917" Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.791207 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-kube-api-access-clgqz" (OuterVolumeSpecName: "kube-api-access-clgqz") pod "f7a33dc3-ce4c-445b-8d71-9d9c3860302a" (UID: "f7a33dc3-ce4c-445b-8d71-9d9c3860302a"). InnerVolumeSpecName "kube-api-access-clgqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.867307 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clgqz\" (UniqueName: \"kubernetes.io/projected/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-kube-api-access-clgqz\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.867807 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.908937 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7a33dc3-ce4c-445b-8d71-9d9c3860302a" (UID: "f7a33dc3-ce4c-445b-8d71-9d9c3860302a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:01:42 crc kubenswrapper[4803]: I0127 22:01:42.969711 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a33dc3-ce4c-445b-8d71-9d9c3860302a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:43 crc kubenswrapper[4803]: I0127 22:01:43.017874 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4n94b"] Jan 27 22:01:43 crc kubenswrapper[4803]: I0127 22:01:43.022642 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4n94b"] Jan 27 22:01:43 crc kubenswrapper[4803]: I0127 22:01:43.702541 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" event={"ID":"51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d","Type":"ContainerStarted","Data":"25a6330c83952a88847c0c11119bab07dd48438558722bf00eaf754e94430d5c"} Jan 27 22:01:43 crc kubenswrapper[4803]: I0127 22:01:43.703450 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:43 crc kubenswrapper[4803]: I0127 22:01:43.704391 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-n4gqd" event={"ID":"98bff570-ae6c-423d-8a0b-0d2aed9e0853","Type":"ContainerStarted","Data":"c140f593a27c630eadf3306424a84a90df93b9051c59b9a73f2a3d91c7a114bf"} Jan 27 22:01:43 crc kubenswrapper[4803]: I0127 22:01:43.705749 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 22:01:43 crc kubenswrapper[4803]: I0127 22:01:43.728163 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" podStartSLOduration=2.601416129 podStartE2EDuration="15.728142028s" podCreationTimestamp="2026-01-27 22:01:28 +0000 UTC" firstStartedPulling="2026-01-27 22:01:29.508423379 +0000 UTC m=+841.924445078" lastFinishedPulling="2026-01-27 22:01:42.635149278 +0000 UTC m=+855.051170977" observedRunningTime="2026-01-27 22:01:43.725692022 +0000 UTC m=+856.141713731" watchObservedRunningTime="2026-01-27 22:01:43.728142028 +0000 UTC m=+856.144163727" Jan 27 22:01:43 crc kubenswrapper[4803]: I0127 22:01:43.758778 4803 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-n4gqd" podStartSLOduration=4.785880731 podStartE2EDuration="12.758759755s" podCreationTimestamp="2026-01-27 22:01:31 +0000 UTC" firstStartedPulling="2026-01-27 22:01:34.643379874 +0000 UTC m=+847.059401573" lastFinishedPulling="2026-01-27 22:01:42.616258908 +0000 UTC m=+855.032280597" observedRunningTime="2026-01-27 22:01:43.750736768 +0000 UTC m=+856.166758477" watchObservedRunningTime="2026-01-27 22:01:43.758759755 +0000 UTC m=+856.174781454" Jan 27 22:01:44 crc kubenswrapper[4803]: I0127 22:01:44.314706 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" path="/var/lib/kubelet/pods/f7a33dc3-ce4c-445b-8d71-9d9c3860302a/volumes" Jan 27 22:01:48 crc kubenswrapper[4803]: I0127 22:01:48.958479 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 27 22:01:48 crc kubenswrapper[4803]: E0127 22:01:48.959207 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerName="registry-server" Jan 27 22:01:48 crc kubenswrapper[4803]: I0127 22:01:48.959225 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerName="registry-server" Jan 27 22:01:48 crc kubenswrapper[4803]: E0127 22:01:48.959263 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerName="extract-content" Jan 27 22:01:48 crc kubenswrapper[4803]: I0127 22:01:48.959272 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerName="extract-content" Jan 27 22:01:48 crc kubenswrapper[4803]: E0127 22:01:48.959283 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerName="extract-utilities" Jan 27 22:01:48 crc kubenswrapper[4803]: I0127 22:01:48.959290 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerName="extract-utilities" Jan 27 22:01:48 crc kubenswrapper[4803]: I0127 22:01:48.959421 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a33dc3-ce4c-445b-8d71-9d9c3860302a" containerName="registry-server" Jan 27 22:01:48 crc kubenswrapper[4803]: I0127 22:01:48.959956 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Jan 27 22:01:48 crc kubenswrapper[4803]: I0127 22:01:48.961368 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 27 22:01:48 crc kubenswrapper[4803]: I0127 22:01:48.965766 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 27 22:01:48 crc kubenswrapper[4803]: I0127 22:01:48.967439 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 27 22:01:49 crc kubenswrapper[4803]: I0127 22:01:49.048693 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdczh\" (UniqueName: \"kubernetes.io/projected/f4c00ee5-6cec-4491-96dc-26f87e5d441f-kube-api-access-tdczh\") pod \"minio\" (UID: \"f4c00ee5-6cec-4491-96dc-26f87e5d441f\") " pod="minio-dev/minio" Jan 27 22:01:49 crc kubenswrapper[4803]: I0127 22:01:49.048810 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4565bebb-c794-430b-b08e-74f4788ae606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4565bebb-c794-430b-b08e-74f4788ae606\") pod \"minio\" (UID: \"f4c00ee5-6cec-4491-96dc-26f87e5d441f\") " pod="minio-dev/minio" Jan 27 22:01:49 crc kubenswrapper[4803]: I0127 22:01:49.150099 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdczh\" (UniqueName: \"kubernetes.io/projected/f4c00ee5-6cec-4491-96dc-26f87e5d441f-kube-api-access-tdczh\") pod \"minio\" (UID: \"f4c00ee5-6cec-4491-96dc-26f87e5d441f\") " pod="minio-dev/minio" Jan 27 22:01:49 crc kubenswrapper[4803]: I0127 22:01:49.150188 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4565bebb-c794-430b-b08e-74f4788ae606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4565bebb-c794-430b-b08e-74f4788ae606\") pod \"minio\" (UID: \"f4c00ee5-6cec-4491-96dc-26f87e5d441f\") " pod="minio-dev/minio" Jan 27 22:01:49 crc kubenswrapper[4803]: I0127 22:01:49.153602 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 22:01:49 crc kubenswrapper[4803]: I0127 22:01:49.153635 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4565bebb-c794-430b-b08e-74f4788ae606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4565bebb-c794-430b-b08e-74f4788ae606\") pod \"minio\" (UID: \"f4c00ee5-6cec-4491-96dc-26f87e5d441f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/72c29063d96a715794c8476495ec18a596795615d97a9c8e96bc19c688184174/globalmount\"" pod="minio-dev/minio" Jan 27 22:01:49 crc kubenswrapper[4803]: I0127 22:01:49.173452 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdczh\" (UniqueName: \"kubernetes.io/projected/f4c00ee5-6cec-4491-96dc-26f87e5d441f-kube-api-access-tdczh\") pod \"minio\" (UID: \"f4c00ee5-6cec-4491-96dc-26f87e5d441f\") " pod="minio-dev/minio" Jan 27 22:01:49 crc kubenswrapper[4803]: I0127 22:01:49.173460 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4565bebb-c794-430b-b08e-74f4788ae606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4565bebb-c794-430b-b08e-74f4788ae606\") pod \"minio\" (UID: \"f4c00ee5-6cec-4491-96dc-26f87e5d441f\") " pod="minio-dev/minio" Jan 27 22:01:49 crc kubenswrapper[4803]: I0127 22:01:49.278283 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 27 22:01:49 crc kubenswrapper[4803]: I0127 22:01:49.466645 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 27 22:01:49 crc kubenswrapper[4803]: I0127 22:01:49.740267 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f4c00ee5-6cec-4491-96dc-26f87e5d441f","Type":"ContainerStarted","Data":"bd4915d5527eb12661aa1ee4fe0115196217eeb7898bc53a8a07f8674e22d636"} Jan 27 22:01:52 crc kubenswrapper[4803]: I0127 22:01:52.765218 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f4c00ee5-6cec-4491-96dc-26f87e5d441f","Type":"ContainerStarted","Data":"1f5f5bff98457c1cbcaa1d970eb0a90966c5b7b4906ecd05b7a604b60c42c1d6"} Jan 27 22:01:52 crc kubenswrapper[4803]: I0127 22:01:52.778149 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.942155765 podStartE2EDuration="6.778129074s" podCreationTimestamp="2026-01-27 22:01:46 +0000 UTC" firstStartedPulling="2026-01-27 22:01:49.482977467 +0000 UTC m=+861.898999166" lastFinishedPulling="2026-01-27 22:01:52.318950776 +0000 UTC m=+864.734972475" observedRunningTime="2026-01-27 22:01:52.777443556 +0000 UTC m=+865.193465255" watchObservedRunningTime="2026-01-27 22:01:52.778129074 +0000 UTC m=+865.194150773" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.219010 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw"] Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.220218 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.223252 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.226212 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.226363 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.226396 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.226516 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-dvg6c" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.228637 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw"] Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.286112 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/dea15eec-6442-4acb-b40a-418dddb46623-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.286163 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/dea15eec-6442-4acb-b40a-418dddb46623-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.286199 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dea15eec-6442-4acb-b40a-418dddb46623-config\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.286330 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nknzp\" (UniqueName: \"kubernetes.io/projected/dea15eec-6442-4acb-b40a-418dddb46623-kube-api-access-nknzp\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.286370 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dea15eec-6442-4acb-b40a-418dddb46623-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.387980 4803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nknzp\" (UniqueName: \"kubernetes.io/projected/dea15eec-6442-4acb-b40a-418dddb46623-kube-api-access-nknzp\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.388042 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dea15eec-6442-4acb-b40a-418dddb46623-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.388086 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/dea15eec-6442-4acb-b40a-418dddb46623-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.388120 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/dea15eec-6442-4acb-b40a-418dddb46623-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.388153 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dea15eec-6442-4acb-b40a-418dddb46623-config\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.389310 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dea15eec-6442-4acb-b40a-418dddb46623-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.390077 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dea15eec-6442-4acb-b40a-418dddb46623-config\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.391918 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-q4xmw"] Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.393838 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/dea15eec-6442-4acb-b40a-418dddb46623-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 
crc kubenswrapper[4803]: I0127 22:01:58.393878 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/dea15eec-6442-4acb-b40a-418dddb46623-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.394740 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.397738 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.398046 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.398196 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.409463 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nknzp\" (UniqueName: \"kubernetes.io/projected/dea15eec-6442-4acb-b40a-418dddb46623-kube-api-access-nknzp\") pod \"logging-loki-distributor-5f678c8dd6-zr5dw\" (UID: \"dea15eec-6442-4acb-b40a-418dddb46623\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.410168 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-q4xmw"] Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.451527 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm"] Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.457391 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.464442 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.464674 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.469497 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm"] Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.489985 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.490058 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654pl\" (UniqueName: \"kubernetes.io/projected/1e455314-8336-4d0e-a611-044952db08e7-kube-api-access-654pl\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.490086 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh8ll\" (UniqueName: \"kubernetes.io/projected/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-kube-api-access-rh8ll\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.490137 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e455314-8336-4d0e-a611-044952db08e7-config\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.490165 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-s3\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.490235 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.490313 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-config\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.490365 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.490393 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.490437 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.490465 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.539588 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.549659 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-8597d8df56-shvtm"] Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.550674 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.556721 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.557138 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.557196 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.557267 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.557301 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.573628 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-8597d8df56-dkqb6"] Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.574876 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.576522 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-xvhqt" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.577780 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8597d8df56-shvtm"] Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.590336 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8597d8df56-dkqb6"] Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591242 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-tls-secret\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591292 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-config\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591327 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591353 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: 
\"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591380 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591408 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591430 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-tenants\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591454 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591475 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654pl\" (UniqueName: \"kubernetes.io/projected/1e455314-8336-4d0e-a611-044952db08e7-kube-api-access-654pl\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591491 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh8ll\" (UniqueName: \"kubernetes.io/projected/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-kube-api-access-rh8ll\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591511 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591538 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e455314-8336-4d0e-a611-044952db08e7-config\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " 
pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591563 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-s3\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591587 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7xdd\" (UniqueName: \"kubernetes.io/projected/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-kube-api-access-n7xdd\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591609 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-rbac\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591631 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591654 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591685 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-lokistack-gateway\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.591712 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.592700 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 
27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.593462 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-config\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.593618 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.593730 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e455314-8336-4d0e-a611-044952db08e7-config\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.596358 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.597968 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.598612 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.599824 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-s3\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.612395 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/1e455314-8336-4d0e-a611-044952db08e7-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.625280 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rh8ll\" (UniqueName: \"kubernetes.io/projected/0323234b-6aa2-41ea-bf58-a4b3924d6e4a-kube-api-access-rh8ll\") pod \"logging-loki-query-frontend-69d9546745-bs4dm\" (UID: \"0323234b-6aa2-41ea-bf58-a4b3924d6e4a\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.625960 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654pl\" (UniqueName: \"kubernetes.io/projected/1e455314-8336-4d0e-a611-044952db08e7-kube-api-access-654pl\") pod \"logging-loki-querier-76788598db-q4xmw\" (UID: \"1e455314-8336-4d0e-a611-044952db08e7\") " pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694166 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-rbac\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694212 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694230 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694262 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694281 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-lokistack-gateway\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694323 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-lokistack-gateway\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694359 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-rbac\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694395 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2hx6\" (UniqueName: \"kubernetes.io/projected/806f03eb-fc44-4b50-953e-d4101abd8bc3-kube-api-access-c2hx6\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694445 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-tls-secret\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694504 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-tenants\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694544 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: E0127 22:01:58.694542 4803 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694592 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-tenants\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694645 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-tls-secret\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694672 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.694734 4803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: E0127 22:01:58.694794 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-tls-secret podName:bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b nodeName:}" failed. No retries permitted until 2026-01-27 22:01:59.194777621 +0000 UTC m=+871.610799320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-tls-secret") pod "logging-loki-gateway-8597d8df56-shvtm" (UID: "bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b") : secret "logging-loki-gateway-http" not found Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.695039 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7xdd\" (UniqueName: \"kubernetes.io/projected/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-kube-api-access-n7xdd\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.695211 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-lokistack-gateway\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.695043 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.695550 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.696355 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-rbac\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.697428 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 
crc kubenswrapper[4803]: I0127 22:01:58.697667 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-tenants\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.711212 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7xdd\" (UniqueName: \"kubernetes.io/projected/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-kube-api-access-n7xdd\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.767568 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.785457 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.796606 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.796665 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-lokistack-gateway\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.796705 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-rbac\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.796728 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2hx6\" (UniqueName: \"kubernetes.io/projected/806f03eb-fc44-4b50-953e-d4101abd8bc3-kube-api-access-c2hx6\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.796801 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-tenants\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.796865 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.796897 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-tls-secret\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.796920 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.797942 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.798120 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.798984 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-lokistack-gateway\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: E0127 22:01:58.799065 4803 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 27 22:01:58 crc kubenswrapper[4803]: E0127 22:01:58.799118 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-tls-secret podName:806f03eb-fc44-4b50-953e-d4101abd8bc3 nodeName:}" failed. No retries permitted until 2026-01-27 22:01:59.299102848 +0000 UTC m=+871.715124547 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-tls-secret") pod "logging-loki-gateway-8597d8df56-dkqb6" (UID: "806f03eb-fc44-4b50-953e-d4101abd8bc3") : secret "logging-loki-gateway-http" not found Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.799492 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/806f03eb-fc44-4b50-953e-d4101abd8bc3-rbac\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.805990 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-tenants\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.806540 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:58 crc kubenswrapper[4803]: I0127 22:01:58.817646 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2hx6\" (UniqueName: \"kubernetes.io/projected/806f03eb-fc44-4b50-953e-d4101abd8bc3-kube-api-access-c2hx6\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.069817 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw"] Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.098331 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" event={"ID":"dea15eec-6442-4acb-b40a-418dddb46623","Type":"ContainerStarted","Data":"09d52627cb484dbd2add9417466ad781145f57e27b508b35fe9ba13789706a44"} Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.201303 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-tls-secret\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.205007 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b-tls-secret\") pod \"logging-loki-gateway-8597d8df56-shvtm\" (UID: \"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:59 crc kubenswrapper[4803]: W0127 22:01:59.237199 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e455314_8336_4d0e_a611_044952db08e7.slice/crio-811261134d9a44b8e694eeff767790bbac93012fe65417efc33f2d5a9706c31c 
WatchSource:0}: Error finding container 811261134d9a44b8e694eeff767790bbac93012fe65417efc33f2d5a9706c31c: Status 404 returned error can't find the container with id 811261134d9a44b8e694eeff767790bbac93012fe65417efc33f2d5a9706c31c Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.239136 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-q4xmw"] Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.251891 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.302641 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-tls-secret\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.308396 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/806f03eb-fc44-4b50-953e-d4101abd8bc3-tls-secret\") pod \"logging-loki-gateway-8597d8df56-dkqb6\" (UID: \"806f03eb-fc44-4b50-953e-d4101abd8bc3\") " pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.363384 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm"] Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.381060 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.384067 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.392547 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.392722 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.395288 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.452360 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.457118 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.459549 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.459739 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.461955 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.505029 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.505714 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.505749 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.505822 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aaff93a3-7bb3-4ac2-99b5-1ccd9aa8fe4d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aaff93a3-7bb3-4ac2-99b5-1ccd9aa8fe4d\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.505968 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.506024 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7k9\" (UniqueName: \"kubernetes.io/projected/564d57a3-4f2a-46a9-928b-b77dc685d903-kube-api-access-ch7k9\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.506063 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/564d57a3-4f2a-46a9-928b-b77dc685d903-config\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.506108 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-23b80e46-11f2-4981-be58-ee3fb1d879db\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b80e46-11f2-4981-be58-ee3fb1d879db\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.506124 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.506176 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.508195 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.508496 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.515254 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.561127 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608352 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c26ad1-a645-4746-9c19-c7bbda04000c-config\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608392 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aaff93a3-7bb3-4ac2-99b5-1ccd9aa8fe4d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aaff93a3-7bb3-4ac2-99b5-1ccd9aa8fe4d\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608409 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608425 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf7tn\" (UniqueName: \"kubernetes.io/projected/a4c26ad1-a645-4746-9c19-c7bbda04000c-kube-api-access-vf7tn\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608450 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c971192-3e56-4b4a-9539-ac14f1293968\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c971192-3e56-4b4a-9539-ac14f1293968\") pod 
\"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608467 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608481 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608506 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-23b80e46-11f2-4981-be58-ee3fb1d879db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b80e46-11f2-4981-be58-ee3fb1d879db\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608529 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608545 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608562 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608602 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-963b82d8-6b99-4118-8e96-79d67cd54b1f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-963b82d8-6b99-4118-8e96-79d67cd54b1f\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608624 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6efa3b11-b2ea-4f6d-87d2-177229718026-config\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc 
kubenswrapper[4803]: I0127 22:01:59.608641 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608657 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608671 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppkwb\" (UniqueName: \"kubernetes.io/projected/6efa3b11-b2ea-4f6d-87d2-177229718026-kube-api-access-ppkwb\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608699 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch7k9\" (UniqueName: \"kubernetes.io/projected/564d57a3-4f2a-46a9-928b-b77dc685d903-kube-api-access-ch7k9\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608714 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/564d57a3-4f2a-46a9-928b-b77dc685d903-config\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608733 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608757 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608776 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.608802 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-s3\") 
pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.610006 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.610025 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/564d57a3-4f2a-46a9-928b-b77dc685d903-config\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.613349 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.613388 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aaff93a3-7bb3-4ac2-99b5-1ccd9aa8fe4d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aaff93a3-7bb3-4ac2-99b5-1ccd9aa8fe4d\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5cdd9d1bfccc1a9d1a2d5a176404fd6dd4c050d9ae87e9f72ed75c2fa9d19335/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.613447 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.613493 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-23b80e46-11f2-4981-be58-ee3fb1d879db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b80e46-11f2-4981-be58-ee3fb1d879db\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dac9bcbe8c69ce0d7f37ab386a568c1b29bea6480e07258dcaea379f9a1966b6/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.613795 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.613822 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.614411 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/564d57a3-4f2a-46a9-928b-b77dc685d903-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.629862 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch7k9\" (UniqueName: \"kubernetes.io/projected/564d57a3-4f2a-46a9-928b-b77dc685d903-kube-api-access-ch7k9\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.654477 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aaff93a3-7bb3-4ac2-99b5-1ccd9aa8fe4d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aaff93a3-7bb3-4ac2-99b5-1ccd9aa8fe4d\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.661646 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-23b80e46-11f2-4981-be58-ee3fb1d879db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b80e46-11f2-4981-be58-ee3fb1d879db\") pod \"logging-loki-ingester-0\" (UID: \"564d57a3-4f2a-46a9-928b-b77dc685d903\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.674820 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8597d8df56-shvtm"] Jan 27 22:01:59 crc kubenswrapper[4803]: W0127 22:01:59.687004 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7542cd_ef2e_454e_b2b7_f417dcb1ba9b.slice/crio-b0f3f5e55e165f71a7697660b97e649a4887ce67a05a9e6a315d183c5ff5cb24 WatchSource:0}: Error finding container 
b0f3f5e55e165f71a7697660b97e649a4887ce67a05a9e6a315d183c5ff5cb24: Status 404 returned error can't find the container with id b0f3f5e55e165f71a7697660b97e649a4887ce67a05a9e6a315d183c5ff5cb24 Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710277 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710322 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710354 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-963b82d8-6b99-4118-8e96-79d67cd54b1f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-963b82d8-6b99-4118-8e96-79d67cd54b1f\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710378 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6efa3b11-b2ea-4f6d-87d2-177229718026-config\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710397 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710415 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710431 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppkwb\" (UniqueName: \"kubernetes.io/projected/6efa3b11-b2ea-4f6d-87d2-177229718026-kube-api-access-ppkwb\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710469 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710513 4803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710540 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c26ad1-a645-4746-9c19-c7bbda04000c-config\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710560 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710576 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf7tn\" (UniqueName: \"kubernetes.io/projected/a4c26ad1-a645-4746-9c19-c7bbda04000c-kube-api-access-vf7tn\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710598 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c971192-3e56-4b4a-9539-ac14f1293968\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c971192-3e56-4b4a-9539-ac14f1293968\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.710615 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.711523 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6efa3b11-b2ea-4f6d-87d2-177229718026-config\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.712109 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.712705 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 
22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.713267 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c26ad1-a645-4746-9c19-c7bbda04000c-config\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.715554 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.715636 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.716486 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.717814 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.717926 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-963b82d8-6b99-4118-8e96-79d67cd54b1f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-963b82d8-6b99-4118-8e96-79d67cd54b1f\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/222c4f8af9c082819b668c53326ccdc087bd28e55514be57870609b522111b65/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.717944 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
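The device mount paths logged by MountVolume.MountDevice follow the kubelet's CSI staging layout: <kubelet root>/plugins/kubernetes.io/csi/<driver>/<64-hex digest>/globalmount, where the 64-hex segment (for example 222c4f8af9c082819b668c53326ccdc087bd28e55514be57870609b522111b65 above) is, in recent kubelets, a SHA-256 of the CSI volume handle, hashed to keep the path length bounded and collision-free. A sketch of that derivation; it assumes the volume handle equals the PV name, which is an assumption about kubevirt.io.hostpath-provisioner rather than something this log confirms:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

// deviceMountPath mirrors, in spirit, how the kubelet derives the CSI
// global mount ("staging") path seen in the entries above. It is an
// illustrative sketch, not the kubelet's actual function.
func deviceMountPath(kubeletRoot, driver, volumeHandle string) string {
	sum := sha256.Sum256([]byte(volumeHandle))
	return filepath.Join(kubeletRoot, "plugins", "kubernetes.io/csi",
		driver, fmt.Sprintf("%x", sum), "globalmount")
}

func main() {
	// Assumed handle: the PV name from the log. The real handle is
	// whatever the provisioner returned at CreateVolume time, so the
	// digest printed here may differ from the one in the log.
	fmt.Println(deviceMountPath("/var/lib/kubelet",
		"kubevirt.io.hostpath-provisioner",
		"pvc-963b82d8-6b99-4118-8e96-79d67cd54b1f"))
}
```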
Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.718126 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c971192-3e56-4b4a-9539-ac14f1293968\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c971192-3e56-4b4a-9539-ac14f1293968\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/47bc3fdb359147db6d8447a5f7eed63c26f83555bc87f65814bcb88a77061f61/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.718889 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/a4c26ad1-a645-4746-9c19-c7bbda04000c-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.723338 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.725581 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6efa3b11-b2ea-4f6d-87d2-177229718026-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.728012 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppkwb\" (UniqueName: \"kubernetes.io/projected/6efa3b11-b2ea-4f6d-87d2-177229718026-kube-api-access-ppkwb\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.731664 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf7tn\" (UniqueName: \"kubernetes.io/projected/a4c26ad1-a645-4746-9c19-c7bbda04000c-kube-api-access-vf7tn\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.751014 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c971192-3e56-4b4a-9539-ac14f1293968\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c971192-3e56-4b4a-9539-ac14f1293968\") pod \"logging-loki-compactor-0\" (UID: \"a4c26ad1-a645-4746-9c19-c7bbda04000c\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.754095 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-963b82d8-6b99-4118-8e96-79d67cd54b1f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-963b82d8-6b99-4118-8e96-79d67cd54b1f\") pod \"logging-loki-index-gateway-0\" (UID: \"6efa3b11-b2ea-4f6d-87d2-177229718026\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.759049 4803 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.770415 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:01:59 crc kubenswrapper[4803]: I0127 22:01:59.824894 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:02:00 crc kubenswrapper[4803]: I0127 22:02:00.006118 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8597d8df56-dkqb6"] Jan 27 22:02:00 crc kubenswrapper[4803]: W0127 22:02:00.012248 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod806f03eb_fc44_4b50_953e_d4101abd8bc3.slice/crio-7f931c0d2b6f292698c99d0df817af589298e764ac123e30de6b658864289bec WatchSource:0}: Error finding container 7f931c0d2b6f292698c99d0df817af589298e764ac123e30de6b658864289bec: Status 404 returned error can't find the container with id 7f931c0d2b6f292698c99d0df817af589298e764ac123e30de6b658864289bec Jan 27 22:02:00 crc kubenswrapper[4803]: I0127 22:02:00.112692 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" event={"ID":"0323234b-6aa2-41ea-bf58-a4b3924d6e4a","Type":"ContainerStarted","Data":"f1e5ca23f089d0312c8c803eae25e2efe4986a41d81c198ee70e6353bd705f1c"} Jan 27 22:02:00 crc kubenswrapper[4803]: I0127 22:02:00.114523 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" event={"ID":"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b","Type":"ContainerStarted","Data":"b0f3f5e55e165f71a7697660b97e649a4887ce67a05a9e6a315d183c5ff5cb24"} Jan 27 22:02:00 crc kubenswrapper[4803]: I0127 22:02:00.115322 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" event={"ID":"1e455314-8336-4d0e-a611-044952db08e7","Type":"ContainerStarted","Data":"811261134d9a44b8e694eeff767790bbac93012fe65417efc33f2d5a9706c31c"} Jan 27 22:02:00 crc kubenswrapper[4803]: I0127 22:02:00.116103 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" event={"ID":"806f03eb-fc44-4b50-953e-d4101abd8bc3","Type":"ContainerStarted","Data":"7f931c0d2b6f292698c99d0df817af589298e764ac123e30de6b658864289bec"} Jan 27 22:02:00 crc kubenswrapper[4803]: I0127 22:02:00.116675 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 22:02:00 crc kubenswrapper[4803]: W0127 22:02:00.123978 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod564d57a3_4f2a_46a9_928b_b77dc685d903.slice/crio-c180185cebcf4d27cf8694b2fbf00ac0eb9f8f1eb86ca4d76c7edda9dcd8e6e1 WatchSource:0}: Error finding container c180185cebcf4d27cf8694b2fbf00ac0eb9f8f1eb86ca4d76c7edda9dcd8e6e1: Status 404 returned error can't find the container with id c180185cebcf4d27cf8694b2fbf00ac0eb9f8f1eb86ca4d76c7edda9dcd8e6e1 Jan 27 22:02:00 crc kubenswrapper[4803]: I0127 22:02:00.164349 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 22:02:00 crc kubenswrapper[4803]: I0127 22:02:00.415795 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 22:02:01 crc kubenswrapper[4803]: I0127 22:02:01.123879 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"a4c26ad1-a645-4746-9c19-c7bbda04000c","Type":"ContainerStarted","Data":"806fda1a8ba5c52e2f86bd19546fbe450cbd057d43b5386f952b07c6949532b0"} Jan 27 22:02:01 crc kubenswrapper[4803]: I0127 22:02:01.125184 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"564d57a3-4f2a-46a9-928b-b77dc685d903","Type":"ContainerStarted","Data":"c180185cebcf4d27cf8694b2fbf00ac0eb9f8f1eb86ca4d76c7edda9dcd8e6e1"} Jan 27 22:02:01 crc kubenswrapper[4803]: I0127 22:02:01.126311 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"6efa3b11-b2ea-4f6d-87d2-177229718026","Type":"ContainerStarted","Data":"c1af3ea02dc756fc7c67ad649cf7527c8487798e9f01e1ad53c9af5763afb413"} Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.163669 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" event={"ID":"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b","Type":"ContainerStarted","Data":"e6750f7e6f5bad3177da191825c97af57dba8856e4b8b638b5108e9c71b3cdcb"} Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.165727 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" event={"ID":"1e455314-8336-4d0e-a611-044952db08e7","Type":"ContainerStarted","Data":"24cbf390bbb40ff8ad92d2e0dc4353eb1929524d9aa47688ea02cbcc9ada2b5c"} Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.166794 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.168722 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" event={"ID":"806f03eb-fc44-4b50-953e-d4101abd8bc3","Type":"ContainerStarted","Data":"8350ac4f3750979e8864321f0f04da4b2e95706fa3d7ed852586740280ec96b6"} Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.170128 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" event={"ID":"dea15eec-6442-4acb-b40a-418dddb46623","Type":"ContainerStarted","Data":"f03283caaaa9768f4f37d4d76c098b2f3c76d6415dab822e90df13208e41c980"} Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.170622 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.172651 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"564d57a3-4f2a-46a9-928b-b77dc685d903","Type":"ContainerStarted","Data":"dee8ccb36ab2757ac640abde2f7719eb7c876de9477a2d7a16008efba24909cb"} Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.173067 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.174094 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" 
event={"ID":"0323234b-6aa2-41ea-bf58-a4b3924d6e4a","Type":"ContainerStarted","Data":"83545be5d9acb8e165723d3646bcf11ff198269b8bf486501bc7960865d257c2"} Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.174454 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.175478 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"a4c26ad1-a645-4746-9c19-c7bbda04000c","Type":"ContainerStarted","Data":"2027371ad0e76ed10ceff2b40043778061ac90b5e5f3bb71970dabd431779dcb"} Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.175969 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.177007 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"6efa3b11-b2ea-4f6d-87d2-177229718026","Type":"ContainerStarted","Data":"3962c682e0dada503cb03c70714b5ba994f68f38d4011f3c63690f6b195fab5a"} Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.177371 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.184455 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" podStartSLOduration=2.058734298 podStartE2EDuration="6.184437079s" podCreationTimestamp="2026-01-27 22:01:58 +0000 UTC" firstStartedPulling="2026-01-27 22:01:59.240173587 +0000 UTC m=+871.656195286" lastFinishedPulling="2026-01-27 22:02:03.365876368 +0000 UTC m=+875.781898067" observedRunningTime="2026-01-27 22:02:04.182134617 +0000 UTC m=+876.598156326" watchObservedRunningTime="2026-01-27 22:02:04.184437079 +0000 UTC m=+876.600458778" Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.218315 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.959186529 podStartE2EDuration="6.218295903s" podCreationTimestamp="2026-01-27 22:01:58 +0000 UTC" firstStartedPulling="2026-01-27 22:02:00.128512581 +0000 UTC m=+872.544534280" lastFinishedPulling="2026-01-27 22:02:03.387621955 +0000 UTC m=+875.803643654" observedRunningTime="2026-01-27 22:02:04.213508664 +0000 UTC m=+876.629530383" watchObservedRunningTime="2026-01-27 22:02:04.218295903 +0000 UTC m=+876.634317622" Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.239197 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" podStartSLOduration=2.311173564 podStartE2EDuration="6.239179797s" podCreationTimestamp="2026-01-27 22:01:58 +0000 UTC" firstStartedPulling="2026-01-27 22:01:59.382182091 +0000 UTC m=+871.798203790" lastFinishedPulling="2026-01-27 22:02:03.310188314 +0000 UTC m=+875.726210023" observedRunningTime="2026-01-27 22:02:04.236824483 +0000 UTC m=+876.652846182" watchObservedRunningTime="2026-01-27 22:02:04.239179797 +0000 UTC m=+876.655201496" Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.254501 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.276528217 podStartE2EDuration="6.254483411s" 
podCreationTimestamp="2026-01-27 22:01:58 +0000 UTC" firstStartedPulling="2026-01-27 22:02:00.423607038 +0000 UTC m=+872.839628737" lastFinishedPulling="2026-01-27 22:02:03.401562192 +0000 UTC m=+875.817583931" observedRunningTime="2026-01-27 22:02:04.253312738 +0000 UTC m=+876.669334447" watchObservedRunningTime="2026-01-27 22:02:04.254483411 +0000 UTC m=+876.670505130" Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.284469 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.072688683 podStartE2EDuration="6.284448139s" podCreationTimestamp="2026-01-27 22:01:58 +0000 UTC" firstStartedPulling="2026-01-27 22:02:00.17587715 +0000 UTC m=+872.591898849" lastFinishedPulling="2026-01-27 22:02:03.387636556 +0000 UTC m=+875.803658305" observedRunningTime="2026-01-27 22:02:04.274147351 +0000 UTC m=+876.690169050" watchObservedRunningTime="2026-01-27 22:02:04.284448139 +0000 UTC m=+876.700469848" Jan 27 22:02:04 crc kubenswrapper[4803]: I0127 22:02:04.296904 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" podStartSLOduration=1.999090027 podStartE2EDuration="6.296885415s" podCreationTimestamp="2026-01-27 22:01:58 +0000 UTC" firstStartedPulling="2026-01-27 22:01:59.07844008 +0000 UTC m=+871.494461779" lastFinishedPulling="2026-01-27 22:02:03.376235478 +0000 UTC m=+875.792257167" observedRunningTime="2026-01-27 22:02:04.293768961 +0000 UTC m=+876.709790660" watchObservedRunningTime="2026-01-27 22:02:04.296885415 +0000 UTC m=+876.712907114" Jan 27 22:02:06 crc kubenswrapper[4803]: I0127 22:02:06.218312 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" event={"ID":"bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b","Type":"ContainerStarted","Data":"54baf949bf8c73e3ac1142c3fa5d2cfc26e04db840bf6607cf4ead84a006cd9c"} Jan 27 22:02:06 crc kubenswrapper[4803]: I0127 22:02:06.218982 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:02:06 crc kubenswrapper[4803]: I0127 22:02:06.219001 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:02:06 crc kubenswrapper[4803]: I0127 22:02:06.221558 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" event={"ID":"806f03eb-fc44-4b50-953e-d4101abd8bc3","Type":"ContainerStarted","Data":"b586840254f5a3e7c8d47cfa8f9fae62ee79518097e4de640706e95b1f8c2707"} Jan 27 22:02:06 crc kubenswrapper[4803]: I0127 22:02:06.230717 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:02:06 crc kubenswrapper[4803]: I0127 22:02:06.237658 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" Jan 27 22:02:06 crc kubenswrapper[4803]: I0127 22:02:06.245193 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podStartSLOduration=2.024280488 podStartE2EDuration="8.245170658s" podCreationTimestamp="2026-01-27 22:01:58 +0000 UTC" firstStartedPulling="2026-01-27 22:01:59.693730263 +0000 UTC m=+872.109751962" lastFinishedPulling="2026-01-27 22:02:05.914620423 +0000 UTC 
m=+878.330642132" observedRunningTime="2026-01-27 22:02:06.239290989 +0000 UTC m=+878.655312698" watchObservedRunningTime="2026-01-27 22:02:06.245170658 +0000 UTC m=+878.661192357" Jan 27 22:02:06 crc kubenswrapper[4803]: I0127 22:02:06.267352 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podStartSLOduration=2.376578599 podStartE2EDuration="8.267330626s" podCreationTimestamp="2026-01-27 22:01:58 +0000 UTC" firstStartedPulling="2026-01-27 22:02:00.01848634 +0000 UTC m=+872.434508039" lastFinishedPulling="2026-01-27 22:02:05.909238367 +0000 UTC m=+878.325260066" observedRunningTime="2026-01-27 22:02:06.262413353 +0000 UTC m=+878.678435062" watchObservedRunningTime="2026-01-27 22:02:06.267330626 +0000 UTC m=+878.683352325" Jan 27 22:02:07 crc kubenswrapper[4803]: I0127 22:02:07.229417 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:02:07 crc kubenswrapper[4803]: I0127 22:02:07.229449 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:02:07 crc kubenswrapper[4803]: I0127 22:02:07.237997 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:02:07 crc kubenswrapper[4803]: I0127 22:02:07.246214 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" Jan 27 22:02:16 crc kubenswrapper[4803]: I0127 22:02:16.343282 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:02:16 crc kubenswrapper[4803]: I0127 22:02:16.343977 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:02:18 crc kubenswrapper[4803]: I0127 22:02:18.547518 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 22:02:18 crc kubenswrapper[4803]: I0127 22:02:18.807236 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 22:02:18 crc kubenswrapper[4803]: I0127 22:02:18.824286 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 22:02:19 crc kubenswrapper[4803]: I0127 22:02:19.764538 4803 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 27 22:02:19 crc kubenswrapper[4803]: I0127 22:02:19.764977 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="564d57a3-4f2a-46a9-928b-b77dc685d903" containerName="loki-ingester" probeResult="failure" 
output="HTTP probe failed with statuscode: 503" Jan 27 22:02:19 crc kubenswrapper[4803]: I0127 22:02:19.776766 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Jan 27 22:02:19 crc kubenswrapper[4803]: I0127 22:02:19.833426 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 22:02:29 crc kubenswrapper[4803]: I0127 22:02:29.765583 4803 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 27 22:02:29 crc kubenswrapper[4803]: I0127 22:02:29.766196 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="564d57a3-4f2a-46a9-928b-b77dc685d903" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 22:02:39 crc kubenswrapper[4803]: I0127 22:02:39.764002 4803 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 27 22:02:39 crc kubenswrapper[4803]: I0127 22:02:39.765752 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="564d57a3-4f2a-46a9-928b-b77dc685d903" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.277600 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-68lsg"] Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.279366 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.313711 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-68lsg"] Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.374092 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-utilities\") pod \"community-operators-68lsg\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") " pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.374244 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqz2g\" (UniqueName: \"kubernetes.io/projected/eb01363d-5787-4752-8a19-b43ffab76e46-kube-api-access-xqz2g\") pod \"community-operators-68lsg\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") " pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.374313 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-catalog-content\") pod \"community-operators-68lsg\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") " pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.475813 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqz2g\" (UniqueName: \"kubernetes.io/projected/eb01363d-5787-4752-8a19-b43ffab76e46-kube-api-access-xqz2g\") pod \"community-operators-68lsg\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") " pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.475913 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-catalog-content\") pod \"community-operators-68lsg\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") " pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.475976 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-utilities\") pod \"community-operators-68lsg\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") " pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.476590 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-catalog-content\") pod \"community-operators-68lsg\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") " pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.476663 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-utilities\") pod \"community-operators-68lsg\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") " pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.500382 4803 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xqz2g\" (UniqueName: \"kubernetes.io/projected/eb01363d-5787-4752-8a19-b43ffab76e46-kube-api-access-xqz2g\") pod \"community-operators-68lsg\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") " pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:41 crc kubenswrapper[4803]: I0127 22:02:41.609629 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:42 crc kubenswrapper[4803]: I0127 22:02:42.081782 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-68lsg"] Jan 27 22:02:42 crc kubenswrapper[4803]: W0127 22:02:42.084531 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb01363d_5787_4752_8a19_b43ffab76e46.slice/crio-12293221f608e248ce3255b0db7c2624d0c5af2cc9f5d7340bcbb500bfca92ea WatchSource:0}: Error finding container 12293221f608e248ce3255b0db7c2624d0c5af2cc9f5d7340bcbb500bfca92ea: Status 404 returned error can't find the container with id 12293221f608e248ce3255b0db7c2624d0c5af2cc9f5d7340bcbb500bfca92ea Jan 27 22:02:42 crc kubenswrapper[4803]: I0127 22:02:42.468727 4803 generic.go:334] "Generic (PLEG): container finished" podID="eb01363d-5787-4752-8a19-b43ffab76e46" containerID="61824c75db9a7cd45ec1d54c559644a59f62825b70ff4cdd7476ba0b9076f6ab" exitCode=0 Jan 27 22:02:42 crc kubenswrapper[4803]: I0127 22:02:42.468785 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68lsg" event={"ID":"eb01363d-5787-4752-8a19-b43ffab76e46","Type":"ContainerDied","Data":"61824c75db9a7cd45ec1d54c559644a59f62825b70ff4cdd7476ba0b9076f6ab"} Jan 27 22:02:42 crc kubenswrapper[4803]: I0127 22:02:42.469111 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68lsg" event={"ID":"eb01363d-5787-4752-8a19-b43ffab76e46","Type":"ContainerStarted","Data":"12293221f608e248ce3255b0db7c2624d0c5af2cc9f5d7340bcbb500bfca92ea"} Jan 27 22:02:43 crc kubenswrapper[4803]: I0127 22:02:43.477000 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68lsg" event={"ID":"eb01363d-5787-4752-8a19-b43ffab76e46","Type":"ContainerStarted","Data":"6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872"} Jan 27 22:02:44 crc kubenswrapper[4803]: I0127 22:02:44.488295 4803 generic.go:334] "Generic (PLEG): container finished" podID="eb01363d-5787-4752-8a19-b43ffab76e46" containerID="6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872" exitCode=0 Jan 27 22:02:44 crc kubenswrapper[4803]: I0127 22:02:44.488355 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68lsg" event={"ID":"eb01363d-5787-4752-8a19-b43ffab76e46","Type":"ContainerDied","Data":"6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872"} Jan 27 22:02:45 crc kubenswrapper[4803]: I0127 22:02:45.502506 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68lsg" event={"ID":"eb01363d-5787-4752-8a19-b43ffab76e46","Type":"ContainerStarted","Data":"136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde"} Jan 27 22:02:45 crc kubenswrapper[4803]: I0127 22:02:45.523869 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-68lsg" 
podStartSLOduration=2.037736137 podStartE2EDuration="4.523840371s" podCreationTimestamp="2026-01-27 22:02:41 +0000 UTC" firstStartedPulling="2026-01-27 22:02:42.470007479 +0000 UTC m=+914.886029188" lastFinishedPulling="2026-01-27 22:02:44.956111703 +0000 UTC m=+917.372133422" observedRunningTime="2026-01-27 22:02:45.522549176 +0000 UTC m=+917.938570875" watchObservedRunningTime="2026-01-27 22:02:45.523840371 +0000 UTC m=+917.939862070" Jan 27 22:02:46 crc kubenswrapper[4803]: I0127 22:02:46.343826 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:02:46 crc kubenswrapper[4803]: I0127 22:02:46.343908 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:02:49 crc kubenswrapper[4803]: I0127 22:02:49.763992 4803 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 27 22:02:49 crc kubenswrapper[4803]: I0127 22:02:49.764431 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="564d57a3-4f2a-46a9-928b-b77dc685d903" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.188736 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m98v2"] Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.190889 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m98v2" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.219546 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m98v2"] Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.325807 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfwct\" (UniqueName: \"kubernetes.io/projected/369c4fb3-7844-4249-99c0-73efc457eaea-kube-api-access-jfwct\") pod \"certified-operators-m98v2\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") " pod="openshift-marketplace/certified-operators-m98v2" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.325944 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-utilities\") pod \"certified-operators-m98v2\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") " pod="openshift-marketplace/certified-operators-m98v2" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.326044 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-catalog-content\") pod \"certified-operators-m98v2\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") " pod="openshift-marketplace/certified-operators-m98v2" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.435559 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfwct\" (UniqueName: \"kubernetes.io/projected/369c4fb3-7844-4249-99c0-73efc457eaea-kube-api-access-jfwct\") pod \"certified-operators-m98v2\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") " pod="openshift-marketplace/certified-operators-m98v2" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.435677 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-utilities\") pod \"certified-operators-m98v2\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") " pod="openshift-marketplace/certified-operators-m98v2" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.436839 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-catalog-content\") pod \"certified-operators-m98v2\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") " pod="openshift-marketplace/certified-operators-m98v2" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.438951 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-utilities\") pod \"certified-operators-m98v2\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") " pod="openshift-marketplace/certified-operators-m98v2" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.439723 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-catalog-content\") pod \"certified-operators-m98v2\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") " pod="openshift-marketplace/certified-operators-m98v2" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.467169 4803 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jfwct\" (UniqueName: \"kubernetes.io/projected/369c4fb3-7844-4249-99c0-73efc457eaea-kube-api-access-jfwct\") pod \"certified-operators-m98v2\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") " pod="openshift-marketplace/certified-operators-m98v2" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.513705 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m98v2" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.609855 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.610177 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.696584 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:51 crc kubenswrapper[4803]: I0127 22:02:51.825163 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m98v2"] Jan 27 22:02:52 crc kubenswrapper[4803]: I0127 22:02:52.561903 4803 generic.go:334] "Generic (PLEG): container finished" podID="369c4fb3-7844-4249-99c0-73efc457eaea" containerID="295e3fa8b5191f9d309b5e6669544f69f6a35b0388621ac5d00648464e960241" exitCode=0 Jan 27 22:02:52 crc kubenswrapper[4803]: I0127 22:02:52.561957 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m98v2" event={"ID":"369c4fb3-7844-4249-99c0-73efc457eaea","Type":"ContainerDied","Data":"295e3fa8b5191f9d309b5e6669544f69f6a35b0388621ac5d00648464e960241"} Jan 27 22:02:52 crc kubenswrapper[4803]: I0127 22:02:52.562361 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m98v2" event={"ID":"369c4fb3-7844-4249-99c0-73efc457eaea","Type":"ContainerStarted","Data":"e496f3c337766f9a144caca55d6b7311aace19d665be01dbf2b84af466272ef4"} Jan 27 22:02:52 crc kubenswrapper[4803]: I0127 22:02:52.608334 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-68lsg" Jan 27 22:02:53 crc kubenswrapper[4803]: I0127 22:02:53.569468 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m98v2" event={"ID":"369c4fb3-7844-4249-99c0-73efc457eaea","Type":"ContainerStarted","Data":"7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b"} Jan 27 22:02:53 crc kubenswrapper[4803]: I0127 22:02:53.964084 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-68lsg"] Jan 27 22:02:54 crc kubenswrapper[4803]: I0127 22:02:54.577800 4803 generic.go:334] "Generic (PLEG): container finished" podID="369c4fb3-7844-4249-99c0-73efc457eaea" containerID="7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b" exitCode=0 Jan 27 22:02:54 crc kubenswrapper[4803]: I0127 22:02:54.577901 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m98v2" event={"ID":"369c4fb3-7844-4249-99c0-73efc457eaea","Type":"ContainerDied","Data":"7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b"} Jan 27 22:02:54 crc kubenswrapper[4803]: I0127 22:02:54.578285 4803 kuberuntime_container.go:808] "Killing container with a grace period" 
Jan 27 22:02:54 crc kubenswrapper[4803]: I0127 22:02:54.578285 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-68lsg" podUID="eb01363d-5787-4752-8a19-b43ffab76e46" containerName="registry-server" containerID="cri-o://136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde" gracePeriod=2
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.153243 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-68lsg"
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.321484 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-catalog-content\") pod \"eb01363d-5787-4752-8a19-b43ffab76e46\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") "
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.322015 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqz2g\" (UniqueName: \"kubernetes.io/projected/eb01363d-5787-4752-8a19-b43ffab76e46-kube-api-access-xqz2g\") pod \"eb01363d-5787-4752-8a19-b43ffab76e46\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") "
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.322073 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-utilities\") pod \"eb01363d-5787-4752-8a19-b43ffab76e46\" (UID: \"eb01363d-5787-4752-8a19-b43ffab76e46\") "
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.322786 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-utilities" (OuterVolumeSpecName: "utilities") pod "eb01363d-5787-4752-8a19-b43ffab76e46" (UID: "eb01363d-5787-4752-8a19-b43ffab76e46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.323174 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.333213 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb01363d-5787-4752-8a19-b43ffab76e46-kube-api-access-xqz2g" (OuterVolumeSpecName: "kube-api-access-xqz2g") pod "eb01363d-5787-4752-8a19-b43ffab76e46" (UID: "eb01363d-5787-4752-8a19-b43ffab76e46"). InnerVolumeSpecName "kube-api-access-xqz2g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.389439 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb01363d-5787-4752-8a19-b43ffab76e46" (UID: "eb01363d-5787-4752-8a19-b43ffab76e46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.424269 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb01363d-5787-4752-8a19-b43ffab76e46-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.424304 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqz2g\" (UniqueName: \"kubernetes.io/projected/eb01363d-5787-4752-8a19-b43ffab76e46-kube-api-access-xqz2g\") on node \"crc\" DevicePath \"\""
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.585909 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m98v2" event={"ID":"369c4fb3-7844-4249-99c0-73efc457eaea","Type":"ContainerStarted","Data":"fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0"}
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.587648 4803 generic.go:334] "Generic (PLEG): container finished" podID="eb01363d-5787-4752-8a19-b43ffab76e46" containerID="136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde" exitCode=0
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.587689 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68lsg" event={"ID":"eb01363d-5787-4752-8a19-b43ffab76e46","Type":"ContainerDied","Data":"136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde"}
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.587723 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68lsg" event={"ID":"eb01363d-5787-4752-8a19-b43ffab76e46","Type":"ContainerDied","Data":"12293221f608e248ce3255b0db7c2624d0c5af2cc9f5d7340bcbb500bfca92ea"}
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.587743 4803 scope.go:117] "RemoveContainer" containerID="136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde"
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.587761 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-68lsg"
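gracePeriod=2 above is the pod's termination grace period: the runtime sends SIGTERM and escalates to SIGKILL only if the container is still running when the grace period expires. Here the registry-server exits in time (the PLEG entry shows exitCode=0). A self-contained sketch of that sequence, with stand-ins for the CRI calls:

```go
// Sketch of the grace-period kill visible above ("Killing container with a
// grace period" ... gracePeriod=2): signal the container, wait up to the
// grace period, then force-kill. The CRI plumbing is hypothetical.
package main

import (
	"fmt"
	"time"
)

func killContainer(id string, grace time.Duration, exited <-chan struct{}) {
	fmt.Printf("stopping %s (SIGTERM, grace %s)\n", id, grace)
	select {
	case <-exited:
		fmt.Println("container exited within the grace period") // exitCode=0 above
	case <-time.After(grace):
		fmt.Printf("grace period elapsed, force-killing %s (SIGKILL)\n", id)
	}
}

func main() {
	exited := make(chan struct{})
	go func() { time.Sleep(500 * time.Millisecond); close(exited) }() // container shuts down cleanly
	killContainer("cri-o://136ea724...", 2*time.Second, exited)
}
```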
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.609497 4803 scope.go:117] "RemoveContainer" containerID="6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872"
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.610713 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m98v2" podStartSLOduration=2.127861296 podStartE2EDuration="4.610696751s" podCreationTimestamp="2026-01-27 22:02:51 +0000 UTC" firstStartedPulling="2026-01-27 22:02:52.563674783 +0000 UTC m=+924.979696492" lastFinishedPulling="2026-01-27 22:02:55.046510248 +0000 UTC m=+927.462531947" observedRunningTime="2026-01-27 22:02:55.606510427 +0000 UTC m=+928.022532126" watchObservedRunningTime="2026-01-27 22:02:55.610696751 +0000 UTC m=+928.026718440"
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.633074 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-68lsg"]
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.634746 4803 scope.go:117] "RemoveContainer" containerID="61824c75db9a7cd45ec1d54c559644a59f62825b70ff4cdd7476ba0b9076f6ab"
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.638292 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-68lsg"]
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.649118 4803 scope.go:117] "RemoveContainer" containerID="136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde"
Jan 27 22:02:55 crc kubenswrapper[4803]: E0127 22:02:55.649484 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde\": container with ID starting with 136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde not found: ID does not exist" containerID="136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde"
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.649516 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde"} err="failed to get container status \"136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde\": rpc error: code = NotFound desc = could not find container \"136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde\": container with ID starting with 136ea72442a73dbd2e16325e57483d3aab68e80cc592975c80e7c17e84c5abde not found: ID does not exist"
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.649536 4803 scope.go:117] "RemoveContainer" containerID="6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872"
Jan 27 22:02:55 crc kubenswrapper[4803]: E0127 22:02:55.649753 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872\": container with ID starting with 6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872 not found: ID does not exist" containerID="6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872"
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.649771 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872"} err="failed to get container status \"6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872\": rpc error: code = NotFound desc = could not find container \"6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872\": container with ID starting with 6aa32e6c701213a11a09fe40e50b8db220dad222355e64fdc0c4cfeed15a3872 not found: ID does not exist"
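The startup-latency entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that e2e duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling, using the monotonic m=+ offsets). Checking the arithmetic:

```go
// Reproducing the numbers from the pod_startup_latency_tracker entry above,
// using the monotonic offsets (m=+...) printed in that entry.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 924.979696492 // m=+ offset, seconds
		lastFinishedPulling = 927.462531947
		podStartE2E         = 4.610696751 // watchObservedRunning - creation
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull took %.9fs\n", pull)                   // 2.482835455s
	fmt.Printf("podStartSLOduration = %.9f\n", podStartE2E-pull) // 2.127861296
}
```

4.610696751 - 2.482835455 = 2.127861296, exactly the SLO duration logged, so the pod needed about 2.1s of startup work on top of a 2.5s image pull.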
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.649782 4803 scope.go:117] "RemoveContainer" containerID="61824c75db9a7cd45ec1d54c559644a59f62825b70ff4cdd7476ba0b9076f6ab"
Jan 27 22:02:55 crc kubenswrapper[4803]: E0127 22:02:55.649983 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61824c75db9a7cd45ec1d54c559644a59f62825b70ff4cdd7476ba0b9076f6ab\": container with ID starting with 61824c75db9a7cd45ec1d54c559644a59f62825b70ff4cdd7476ba0b9076f6ab not found: ID does not exist" containerID="61824c75db9a7cd45ec1d54c559644a59f62825b70ff4cdd7476ba0b9076f6ab"
Jan 27 22:02:55 crc kubenswrapper[4803]: I0127 22:02:55.650002 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61824c75db9a7cd45ec1d54c559644a59f62825b70ff4cdd7476ba0b9076f6ab"} err="failed to get container status \"61824c75db9a7cd45ec1d54c559644a59f62825b70ff4cdd7476ba0b9076f6ab\": rpc error: code = NotFound desc = could not find container \"61824c75db9a7cd45ec1d54c559644a59f62825b70ff4cdd7476ba0b9076f6ab\": container with ID starting with 61824c75db9a7cd45ec1d54c559644a59f62825b70ff4cdd7476ba0b9076f6ab not found: ID does not exist"
Jan 27 22:02:56 crc kubenswrapper[4803]: I0127 22:02:56.319245 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb01363d-5787-4752-8a19-b43ffab76e46" path="/var/lib/kubelet/pods/eb01363d-5787-4752-8a19-b43ffab76e46/volumes"
Jan 27 22:02:59 crc kubenswrapper[4803]: I0127 22:02:59.765251 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
Jan 27 22:03:01 crc kubenswrapper[4803]: I0127 22:03:01.514330 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m98v2"
Jan 27 22:03:01 crc kubenswrapper[4803]: I0127 22:03:01.514413 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m98v2"
Jan 27 22:03:01 crc kubenswrapper[4803]: I0127 22:03:01.572780 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m98v2"
Jan 27 22:03:01 crc kubenswrapper[4803]: I0127 22:03:01.699972 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m98v2"
Jan 27 22:03:01 crc kubenswrapper[4803]: I0127 22:03:01.807181 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m98v2"]
Jan 27 22:03:03 crc kubenswrapper[4803]: I0127 22:03:03.662950 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m98v2" podUID="369c4fb3-7844-4249-99c0-73efc457eaea" containerName="registry-server" containerID="cri-o://fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0" gracePeriod=2
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.057729 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m98v2"
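The NotFound errors above are a benign race, not a failure: each container had already been removed by the first DeleteContainer pass, so the follow-up status lookup finds nothing and the cleanup is treated as already done. A sketch of that idempotent handling, with an illustrative error type:

```go
// Sketch of why the NotFound errors above are harmless: container removal is
// effectively idempotent, so "not found" during cleanup means the work is
// already done. The error type and runtime map are illustrative.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("container not found")

func removeContainer(id string, runtime map[string]bool) error {
	if !runtime[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	delete(runtime, id)
	return nil
}

func cleanup(id string, runtime map[string]bool) {
	err := removeContainer(id, runtime)
	switch {
	case err == nil:
		fmt.Println("removed", id)
	case errors.Is(err, errNotFound):
		fmt.Println("already gone, treating as removed:", err)
	default:
		fmt.Println("retry later:", err)
	}
}

func main() {
	runtime := map[string]bool{}      // the runtime no longer knows this ID
	cleanup("136ea724...", runtime)   // second delete of an already-removed container
}
```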
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.154262 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-catalog-content\") pod \"369c4fb3-7844-4249-99c0-73efc457eaea\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") "
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.154427 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfwct\" (UniqueName: \"kubernetes.io/projected/369c4fb3-7844-4249-99c0-73efc457eaea-kube-api-access-jfwct\") pod \"369c4fb3-7844-4249-99c0-73efc457eaea\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") "
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.154530 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-utilities\") pod \"369c4fb3-7844-4249-99c0-73efc457eaea\" (UID: \"369c4fb3-7844-4249-99c0-73efc457eaea\") "
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.155828 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-utilities" (OuterVolumeSpecName: "utilities") pod "369c4fb3-7844-4249-99c0-73efc457eaea" (UID: "369c4fb3-7844-4249-99c0-73efc457eaea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.165116 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/369c4fb3-7844-4249-99c0-73efc457eaea-kube-api-access-jfwct" (OuterVolumeSpecName: "kube-api-access-jfwct") pod "369c4fb3-7844-4249-99c0-73efc457eaea" (UID: "369c4fb3-7844-4249-99c0-73efc457eaea"). InnerVolumeSpecName "kube-api-access-jfwct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.208041 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "369c4fb3-7844-4249-99c0-73efc457eaea" (UID: "369c4fb3-7844-4249-99c0-73efc457eaea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.255945 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfwct\" (UniqueName: \"kubernetes.io/projected/369c4fb3-7844-4249-99c0-73efc457eaea-kube-api-access-jfwct\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.255985 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.255996 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/369c4fb3-7844-4249-99c0-73efc457eaea-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.670903 4803 generic.go:334] "Generic (PLEG): container finished" podID="369c4fb3-7844-4249-99c0-73efc457eaea" containerID="fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0" exitCode=0
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.670948 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m98v2" event={"ID":"369c4fb3-7844-4249-99c0-73efc457eaea","Type":"ContainerDied","Data":"fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0"}
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.670979 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m98v2" event={"ID":"369c4fb3-7844-4249-99c0-73efc457eaea","Type":"ContainerDied","Data":"e496f3c337766f9a144caca55d6b7311aace19d665be01dbf2b84af466272ef4"}
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.670995 4803 scope.go:117] "RemoveContainer" containerID="fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0"
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.671057 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m98v2"
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.694053 4803 scope.go:117] "RemoveContainer" containerID="7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b"
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.694625 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m98v2"]
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.700823 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m98v2"]
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.714637 4803 scope.go:117] "RemoveContainer" containerID="295e3fa8b5191f9d309b5e6669544f69f6a35b0388621ac5d00648464e960241"
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.737646 4803 scope.go:117] "RemoveContainer" containerID="fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0"
Jan 27 22:03:04 crc kubenswrapper[4803]: E0127 22:03:04.738136 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0\": container with ID starting with fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0 not found: ID does not exist" containerID="fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0"
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.738170 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0"} err="failed to get container status \"fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0\": rpc error: code = NotFound desc = could not find container \"fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0\": container with ID starting with fb78eed9038c3672fd39e485dfe6a3d0ed7037f49f7bd617a06e185f4da9dbd0 not found: ID does not exist"
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.738201 4803 scope.go:117] "RemoveContainer" containerID="7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b"
Jan 27 22:03:04 crc kubenswrapper[4803]: E0127 22:03:04.739093 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b\": container with ID starting with 7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b not found: ID does not exist" containerID="7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b"
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.739152 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b"} err="failed to get container status \"7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b\": rpc error: code = NotFound desc = could not find container \"7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b\": container with ID starting with 7d434b3f26c076f03736eb05d2bd88c6d1a9cc581c66f61c9f0777f6e94f386b not found: ID does not exist"
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.739187 4803 scope.go:117] "RemoveContainer" containerID="295e3fa8b5191f9d309b5e6669544f69f6a35b0388621ac5d00648464e960241"
Jan 27 22:03:04 crc kubenswrapper[4803]: E0127 22:03:04.739526 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"295e3fa8b5191f9d309b5e6669544f69f6a35b0388621ac5d00648464e960241\": container with ID starting with 295e3fa8b5191f9d309b5e6669544f69f6a35b0388621ac5d00648464e960241 not found: ID does not exist" containerID="295e3fa8b5191f9d309b5e6669544f69f6a35b0388621ac5d00648464e960241"
Jan 27 22:03:04 crc kubenswrapper[4803]: I0127 22:03:04.739563 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"295e3fa8b5191f9d309b5e6669544f69f6a35b0388621ac5d00648464e960241"} err="failed to get container status \"295e3fa8b5191f9d309b5e6669544f69f6a35b0388621ac5d00648464e960241\": rpc error: code = NotFound desc = could not find container \"295e3fa8b5191f9d309b5e6669544f69f6a35b0388621ac5d00648464e960241\": container with ID starting with 295e3fa8b5191f9d309b5e6669544f69f6a35b0388621ac5d00648464e960241 not found: ID does not exist"
Jan 27 22:03:06 crc kubenswrapper[4803]: I0127 22:03:06.315826 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="369c4fb3-7844-4249-99c0-73efc457eaea" path="/var/lib/kubelet/pods/369c4fb3-7844-4249-99c0-73efc457eaea/volumes"
Jan 27 22:03:16 crc kubenswrapper[4803]: I0127 22:03:16.343571 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 22:03:16 crc kubenswrapper[4803]: I0127 22:03:16.344151 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 22:03:16 crc kubenswrapper[4803]: I0127 22:03:16.344196 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp"
Jan 27 22:03:16 crc kubenswrapper[4803]: I0127 22:03:16.344835 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9f834f520954d1f715c48108c608cf768b5ff78d5b3a0ccfc176c140c448267"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 22:03:16 crc kubenswrapper[4803]: I0127 22:03:16.344906 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://b9f834f520954d1f715c48108c608cf768b5ff78d5b3a0ccfc176c140c448267" gracePeriod=600
Jan 27 22:03:16 crc kubenswrapper[4803]: I0127 22:03:16.768695 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="b9f834f520954d1f715c48108c608cf768b5ff78d5b3a0ccfc176c140c448267" exitCode=0
Jan 27 22:03:16 crc kubenswrapper[4803]: I0127 22:03:16.768745 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"b9f834f520954d1f715c48108c608cf768b5ff78d5b3a0ccfc176c140c448267"}
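Here the machine-config-daemon's liveness probe, an HTTP GET against 127.0.0.1:8798/health, fails with connection refused, so the kubelet kills the container with the pod's 600-second grace period and restarts it in place; the pod itself is not deleted. A minimal version of such a check:

```go
// Minimal sketch of the liveness check that failed above: an HTTP GET that
// treats any connection error or non-2xx status as unhealthy. Threshold
// handling is simplified; the endpoint is taken from the log line.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func live(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := live("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("liveness failure, container will be restarted:", err)
	}
}
```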
Jan 27 22:03:16 crc kubenswrapper[4803]: I0127 22:03:16.769065 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"a5bed6f52f57219858cf339986b99dcfe79ad6cdcbe8912b0cb981f2d60d0415"}
Jan 27 22:03:16 crc kubenswrapper[4803]: I0127 22:03:16.769088 4803 scope.go:117] "RemoveContainer" containerID="95521df131317a8fb1bb4697014746e375ef67b38dfab0db8cdee522c9087edc"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.766980 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-wwn5l"]
Jan 27 22:03:17 crc kubenswrapper[4803]: E0127 22:03:17.767665 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="369c4fb3-7844-4249-99c0-73efc457eaea" containerName="extract-content"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.767687 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="369c4fb3-7844-4249-99c0-73efc457eaea" containerName="extract-content"
Jan 27 22:03:17 crc kubenswrapper[4803]: E0127 22:03:17.767709 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="369c4fb3-7844-4249-99c0-73efc457eaea" containerName="extract-utilities"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.767721 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="369c4fb3-7844-4249-99c0-73efc457eaea" containerName="extract-utilities"
Jan 27 22:03:17 crc kubenswrapper[4803]: E0127 22:03:17.767742 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb01363d-5787-4752-8a19-b43ffab76e46" containerName="registry-server"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.767753 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb01363d-5787-4752-8a19-b43ffab76e46" containerName="registry-server"
Jan 27 22:03:17 crc kubenswrapper[4803]: E0127 22:03:17.767786 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="369c4fb3-7844-4249-99c0-73efc457eaea" containerName="registry-server"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.767797 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="369c4fb3-7844-4249-99c0-73efc457eaea" containerName="registry-server"
Jan 27 22:03:17 crc kubenswrapper[4803]: E0127 22:03:17.767810 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb01363d-5787-4752-8a19-b43ffab76e46" containerName="extract-content"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.767819 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb01363d-5787-4752-8a19-b43ffab76e46" containerName="extract-content"
Jan 27 22:03:17 crc kubenswrapper[4803]: E0127 22:03:17.767840 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb01363d-5787-4752-8a19-b43ffab76e46" containerName="extract-utilities"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.767876 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb01363d-5787-4752-8a19-b43ffab76e46" containerName="extract-utilities"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.768119 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="369c4fb3-7844-4249-99c0-73efc457eaea" containerName="registry-server"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.768143 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb01363d-5787-4752-8a19-b43ffab76e46" containerName="registry-server"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.769017 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.772154 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.772698 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.772934 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.773083 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-4zz48"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.773227 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.785260 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.794325 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-wwn5l"]
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.798241 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-datadir\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.798301 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-metrics\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.798361 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58d8p\" (UniqueName: \"kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-kube-api-access-58d8p\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.798390 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-syslog-receiver\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.798441 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-entrypoint\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.798472 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-trusted-ca\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
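Before admitting the new collector pod, the CPU and memory managers drop per-container resource state left over from the two marketplace pods that were just removed; each E/I pair above is one "RemoveStaleState" warning plus the matching "Deleted CPUSet assignment". A sketch of that stale-state sweep, with illustrative types rather than the real cpu_manager state:

```go
// Sketch of RemoveStaleState as seen above: walk the saved per-container
// assignments and drop any whose pod is no longer active. Types and the
// CPU-set strings are illustrative.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container pod=%s name=%s\n", k.podUID, k.container)
			delete(assignments, k) // "Deleted CPUSet assignment"
		}
	}
}

func main() {
	assignments := map[key]string{
		{"369c4fb3-7844-4249-99c0-73efc457eaea", "registry-server"}: "0-3",
		{"eb01363d-5787-4752-8a19-b43ffab76e46", "registry-server"}: "0-3",
	}
	removeStaleState(assignments, map[string]bool{}) // neither pod is active any more
}
```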
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.798524 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-sa-token\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.798550 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-tmp\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.798595 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-token\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.798641 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.798796 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config-openshift-service-cacrt\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.840366 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-wwn5l"]
Jan 27 22:03:17 crc kubenswrapper[4803]: E0127 22:03:17.841057 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-58d8p metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-wwn5l" podUID="8187e7a7-6ca9-4277-8cc5-4afe21edfe77"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.899881 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config-openshift-service-cacrt\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.899941 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-datadir\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
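collector-wwn5l is deleted by the API (SyncLoop DELETE) while its volumes are still being set up, so the pod worker's context is canceled and the sync aborts with "context canceled" listing every still-unmounted volume; this is a cancellation, not a real mount failure. A sketch of that shape:

```go
// Sketch of the cancellation above: the pod worker runs volume setup under a
// context that is canceled when a DELETE arrives, so in-flight waits return
// context.Canceled instead of completing. The structure is hypothetical.
package main

import (
	"context"
	"fmt"
	"time"
)

func waitForVolumes(ctx context.Context) error {
	select {
	case <-time.After(5 * time.Second): // pretend the mounts take a while
		return nil
	case <-ctx.Done():
		return fmt.Errorf("failed to process volumes: %w", ctx.Err())
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() { time.Sleep(100 * time.Millisecond); cancel() }() // SyncLoop DELETE
	if err := waitForVolumes(ctx); err != nil {
		fmt.Println("Error syncing pod, skipping:", err) // matches the E0127 line
	}
}
```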
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.899965 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-metrics\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.899991 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58d8p\" (UniqueName: \"kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-kube-api-access-58d8p\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.900008 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-syslog-receiver\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.900030 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-entrypoint\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.900043 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-trusted-ca\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.900071 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-sa-token\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.900086 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-tmp\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.900108 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-token\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.900101 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-datadir\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.900134 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.901050 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config-openshift-service-cacrt\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.901094 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.901200 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-entrypoint\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: E0127 22:03:17.901291 4803 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found
Jan 27 22:03:17 crc kubenswrapper[4803]: E0127 22:03:17.901354 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-metrics podName:8187e7a7-6ca9-4277-8cc5-4afe21edfe77 nodeName:}" failed. No retries permitted until 2026-01-27 22:03:18.401335689 +0000 UTC m=+950.817357478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-metrics") pod "collector-wwn5l" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77") : secret "collector-metrics" not found
Jan 27 22:03:17 crc kubenswrapper[4803]: E0127 22:03:17.901683 4803 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found
Jan 27 22:03:17 crc kubenswrapper[4803]: E0127 22:03:17.901728 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-syslog-receiver podName:8187e7a7-6ca9-4277-8cc5-4afe21edfe77 nodeName:}" failed. No retries permitted until 2026-01-27 22:03:18.401715149 +0000 UTC m=+950.817736968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-syslog-receiver") pod "collector-wwn5l" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77") : secret "collector-syslog-receiver" not found
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.902596 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-trusted-ca\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.907173 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-tmp\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.907768 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-token\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.919281 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-sa-token\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:17 crc kubenswrapper[4803]: I0127 22:03:17.929307 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58d8p\" (UniqueName: \"kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-kube-api-access-58d8p\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.408440 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-metrics\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.408495 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-syslog-receiver\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.412704 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-metrics\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.413120 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-syslog-receiver\") pod \"collector-wwn5l\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") " pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.785871 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6xwfs"]
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.789459 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xwfs"
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.794992 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xwfs"]
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.803963 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.822431 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.915002 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-catalog-content\") pod \"redhat-marketplace-6xwfs\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " pod="openshift-marketplace/redhat-marketplace-6xwfs"
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.915097 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-utilities\") pod \"redhat-marketplace-6xwfs\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " pod="openshift-marketplace/redhat-marketplace-6xwfs"
Jan 27 22:03:18 crc kubenswrapper[4803]: I0127 22:03:18.915184 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsznw\" (UniqueName: \"kubernetes.io/projected/c430edea-c079-433c-b91c-82e48f84722d-kube-api-access-wsznw\") pod \"redhat-marketplace-6xwfs\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " pod="openshift-marketplace/redhat-marketplace-6xwfs"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.015901 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58d8p\" (UniqueName: \"kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-kube-api-access-58d8p\") pod \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") "
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.016429 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-tmp\") pod \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") "
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.016467 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config\") pod \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") "
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.016501 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config-openshift-service-cacrt\") pod \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") "
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.016537 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-metrics\") pod \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") "
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.016561 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-datadir\") pod \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") "
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.016576 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-token\") pod \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") "
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.016645 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-syslog-receiver\") pod \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") "
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.016693 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-sa-token\") pod \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") "
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.016747 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-entrypoint\") pod \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") "
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.016785 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-trusted-ca\") pod \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\" (UID: \"8187e7a7-6ca9-4277-8cc5-4afe21edfe77\") "
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.016980 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsznw\" (UniqueName: \"kubernetes.io/projected/c430edea-c079-433c-b91c-82e48f84722d-kube-api-access-wsznw\") pod \"redhat-marketplace-6xwfs\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " pod="openshift-marketplace/redhat-marketplace-6xwfs"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.017019 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config" (OuterVolumeSpecName: "config") pod "8187e7a7-6ca9-4277-8cc5-4afe21edfe77" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.017045 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-catalog-content\") pod \"redhat-marketplace-6xwfs\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " pod="openshift-marketplace/redhat-marketplace-6xwfs"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.017231 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-utilities\") pod \"redhat-marketplace-6xwfs\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " pod="openshift-marketplace/redhat-marketplace-6xwfs"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.017454 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.017780 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-utilities\") pod \"redhat-marketplace-6xwfs\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " pod="openshift-marketplace/redhat-marketplace-6xwfs"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.017814 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-catalog-content\") pod \"redhat-marketplace-6xwfs\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " pod="openshift-marketplace/redhat-marketplace-6xwfs"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.018249 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-datadir" (OuterVolumeSpecName: "datadir") pod "8187e7a7-6ca9-4277-8cc5-4afe21edfe77" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.018603 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8187e7a7-6ca9-4277-8cc5-4afe21edfe77" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.018686 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "8187e7a7-6ca9-4277-8cc5-4afe21edfe77" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.018743 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "8187e7a7-6ca9-4277-8cc5-4afe21edfe77" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.024274 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-token" (OuterVolumeSpecName: "collector-token") pod "8187e7a7-6ca9-4277-8cc5-4afe21edfe77" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.024359 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-tmp" (OuterVolumeSpecName: "tmp") pod "8187e7a7-6ca9-4277-8cc5-4afe21edfe77" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.024755 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "8187e7a7-6ca9-4277-8cc5-4afe21edfe77" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.026908 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-metrics" (OuterVolumeSpecName: "metrics") pod "8187e7a7-6ca9-4277-8cc5-4afe21edfe77" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.028133 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-kube-api-access-58d8p" (OuterVolumeSpecName: "kube-api-access-58d8p") pod "8187e7a7-6ca9-4277-8cc5-4afe21edfe77" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77"). InnerVolumeSpecName "kube-api-access-58d8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.032350 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-sa-token" (OuterVolumeSpecName: "sa-token") pod "8187e7a7-6ca9-4277-8cc5-4afe21edfe77" (UID: "8187e7a7-6ca9-4277-8cc5-4afe21edfe77"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.034407 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsznw\" (UniqueName: \"kubernetes.io/projected/c430edea-c079-433c-b91c-82e48f84722d-kube-api-access-wsznw\") pod \"redhat-marketplace-6xwfs\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " pod="openshift-marketplace/redhat-marketplace-6xwfs"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.112235 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xwfs"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.118478 4803 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-metrics\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.118531 4803 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-datadir\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.118542 4803 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-token\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.118551 4803 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-collector-syslog-receiver\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.118560 4803 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.118571 4803 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-entrypoint\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.118578 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.118586 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58d8p\" (UniqueName: \"kubernetes.io/projected/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-kube-api-access-58d8p\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.118594 4803 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-tmp\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.118603 4803 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8187e7a7-6ca9-4277-8cc5-4afe21edfe77-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\""
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.517193 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xwfs"]
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.817689 4803 generic.go:334] "Generic (PLEG): container finished" podID="c430edea-c079-433c-b91c-82e48f84722d" containerID="109716005fbe304f96b507117ae98cd6d98acb3c192ca504d687006845fba2d1" exitCode=0
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.817777 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-wwn5l"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.818230 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xwfs" event={"ID":"c430edea-c079-433c-b91c-82e48f84722d","Type":"ContainerDied","Data":"109716005fbe304f96b507117ae98cd6d98acb3c192ca504d687006845fba2d1"}
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.818287 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xwfs" event={"ID":"c430edea-c079-433c-b91c-82e48f84722d","Type":"ContainerStarted","Data":"399d47d51d28220847327bc2b2573ebe79803ef77c303c1c8d9092f7a12daf2c"}
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.886933 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-wwn5l"]
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.890665 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-wwn5l"]
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.903631 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-dg4sw"]
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.904694 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dg4sw"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.909903 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.910144 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.910789 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.910865 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-4zz48"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.910910 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.912096 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-dg4sw"]
Jan 27 22:03:19 crc kubenswrapper[4803]: I0127 22:03:19.915479 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.032283 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0f80595e-9f2c-44d6-af65-29acb22c23d0-sa-token\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw"
Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.032339 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0f80595e-9f2c-44d6-af65-29acb22c23d0-metrics\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw"
Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.032378 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName:
\"kubernetes.io/secret/0f80595e-9f2c-44d6-af65-29acb22c23d0-collector-token\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.032512 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f80595e-9f2c-44d6-af65-29acb22c23d0-tmp\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.032591 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fjx7\" (UniqueName: \"kubernetes.io/projected/0f80595e-9f2c-44d6-af65-29acb22c23d0-kube-api-access-6fjx7\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.032634 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-config-openshift-service-cacrt\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.032746 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-config\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.033212 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-entrypoint\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.033259 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-trusted-ca\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.033563 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0f80595e-9f2c-44d6-af65-29acb22c23d0-datadir\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.033603 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0f80595e-9f2c-44d6-af65-29acb22c23d0-collector-syslog-receiver\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.135199 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fjx7\" (UniqueName: 
\"kubernetes.io/projected/0f80595e-9f2c-44d6-af65-29acb22c23d0-kube-api-access-6fjx7\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.135243 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-config-openshift-service-cacrt\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.135279 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-config\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.135314 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-entrypoint\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.135332 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-trusted-ca\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.135368 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0f80595e-9f2c-44d6-af65-29acb22c23d0-datadir\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.135384 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0f80595e-9f2c-44d6-af65-29acb22c23d0-collector-syslog-receiver\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.135410 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0f80595e-9f2c-44d6-af65-29acb22c23d0-sa-token\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.135427 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0f80595e-9f2c-44d6-af65-29acb22c23d0-metrics\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.135457 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/0f80595e-9f2c-44d6-af65-29acb22c23d0-collector-token\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc 
kubenswrapper[4803]: I0127 22:03:20.135490 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f80595e-9f2c-44d6-af65-29acb22c23d0-tmp\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.135500 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0f80595e-9f2c-44d6-af65-29acb22c23d0-datadir\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.136073 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-config-openshift-service-cacrt\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.136449 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-trusted-ca\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.136907 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-config\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.137125 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0f80595e-9f2c-44d6-af65-29acb22c23d0-entrypoint\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.141600 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0f80595e-9f2c-44d6-af65-29acb22c23d0-metrics\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.141632 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0f80595e-9f2c-44d6-af65-29acb22c23d0-collector-syslog-receiver\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.142063 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/0f80595e-9f2c-44d6-af65-29acb22c23d0-collector-token\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.143313 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f80595e-9f2c-44d6-af65-29acb22c23d0-tmp\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " 
pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.151523 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fjx7\" (UniqueName: \"kubernetes.io/projected/0f80595e-9f2c-44d6-af65-29acb22c23d0-kube-api-access-6fjx7\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.152222 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0f80595e-9f2c-44d6-af65-29acb22c23d0-sa-token\") pod \"collector-dg4sw\" (UID: \"0f80595e-9f2c-44d6-af65-29acb22c23d0\") " pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.237549 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dg4sw" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.316888 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8187e7a7-6ca9-4277-8cc5-4afe21edfe77" path="/var/lib/kubelet/pods/8187e7a7-6ca9-4277-8cc5-4afe21edfe77/volumes" Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.643714 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-dg4sw"] Jan 27 22:03:20 crc kubenswrapper[4803]: W0127 22:03:20.648453 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f80595e_9f2c_44d6_af65_29acb22c23d0.slice/crio-068b8aca6d32d55aa622c37b1f1c03ab6ac42a1c482d0a6fca4e664a750a254c WatchSource:0}: Error finding container 068b8aca6d32d55aa622c37b1f1c03ab6ac42a1c482d0a6fca4e664a750a254c: Status 404 returned error can't find the container with id 068b8aca6d32d55aa622c37b1f1c03ab6ac42a1c482d0a6fca4e664a750a254c Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.829320 4803 generic.go:334] "Generic (PLEG): container finished" podID="c430edea-c079-433c-b91c-82e48f84722d" containerID="e4ad7a6168a1977b24d7da0bee75d901423eb63f64e794c4a326197903dc862d" exitCode=0 Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.829382 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xwfs" event={"ID":"c430edea-c079-433c-b91c-82e48f84722d","Type":"ContainerDied","Data":"e4ad7a6168a1977b24d7da0bee75d901423eb63f64e794c4a326197903dc862d"} Jan 27 22:03:20 crc kubenswrapper[4803]: I0127 22:03:20.831006 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-dg4sw" event={"ID":"0f80595e-9f2c-44d6-af65-29acb22c23d0","Type":"ContainerStarted","Data":"068b8aca6d32d55aa622c37b1f1c03ab6ac42a1c482d0a6fca4e664a750a254c"} Jan 27 22:03:21 crc kubenswrapper[4803]: I0127 22:03:21.840070 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xwfs" event={"ID":"c430edea-c079-433c-b91c-82e48f84722d","Type":"ContainerStarted","Data":"a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4"} Jan 27 22:03:21 crc kubenswrapper[4803]: I0127 22:03:21.865978 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6xwfs" podStartSLOduration=2.472002835 podStartE2EDuration="3.865957002s" podCreationTimestamp="2026-01-27 22:03:18 +0000 UTC" firstStartedPulling="2026-01-27 22:03:19.820458774 +0000 UTC m=+952.236480473" lastFinishedPulling="2026-01-27 22:03:21.214412911 +0000 UTC 
m=+953.630434640" observedRunningTime="2026-01-27 22:03:21.862209741 +0000 UTC m=+954.278231450" watchObservedRunningTime="2026-01-27 22:03:21.865957002 +0000 UTC m=+954.281978711" Jan 27 22:03:28 crc kubenswrapper[4803]: I0127 22:03:28.893517 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-dg4sw" event={"ID":"0f80595e-9f2c-44d6-af65-29acb22c23d0","Type":"ContainerStarted","Data":"4555416235af1934fb771ab5a38fb47f03fcb17febafdd80da6287223ce444c0"} Jan 27 22:03:28 crc kubenswrapper[4803]: I0127 22:03:28.926241 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-dg4sw" podStartSLOduration=2.724148704 podStartE2EDuration="9.926215806s" podCreationTimestamp="2026-01-27 22:03:19 +0000 UTC" firstStartedPulling="2026-01-27 22:03:20.650572577 +0000 UTC m=+953.066594266" lastFinishedPulling="2026-01-27 22:03:27.852639669 +0000 UTC m=+960.268661368" observedRunningTime="2026-01-27 22:03:28.922429183 +0000 UTC m=+961.338450902" watchObservedRunningTime="2026-01-27 22:03:28.926215806 +0000 UTC m=+961.342237505" Jan 27 22:03:29 crc kubenswrapper[4803]: I0127 22:03:29.112951 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6xwfs" Jan 27 22:03:29 crc kubenswrapper[4803]: I0127 22:03:29.112997 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6xwfs" Jan 27 22:03:29 crc kubenswrapper[4803]: I0127 22:03:29.163809 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6xwfs" Jan 27 22:03:29 crc kubenswrapper[4803]: I0127 22:03:29.938187 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6xwfs" Jan 27 22:03:29 crc kubenswrapper[4803]: I0127 22:03:29.977556 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xwfs"] Jan 27 22:03:31 crc kubenswrapper[4803]: I0127 22:03:31.932741 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6xwfs" podUID="c430edea-c079-433c-b91c-82e48f84722d" containerName="registry-server" containerID="cri-o://a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4" gracePeriod=2 Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.331987 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xwfs" Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.502450 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-catalog-content\") pod \"c430edea-c079-433c-b91c-82e48f84722d\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.502618 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsznw\" (UniqueName: \"kubernetes.io/projected/c430edea-c079-433c-b91c-82e48f84722d-kube-api-access-wsznw\") pod \"c430edea-c079-433c-b91c-82e48f84722d\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.502669 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-utilities\") pod \"c430edea-c079-433c-b91c-82e48f84722d\" (UID: \"c430edea-c079-433c-b91c-82e48f84722d\") " Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.503753 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-utilities" (OuterVolumeSpecName: "utilities") pod "c430edea-c079-433c-b91c-82e48f84722d" (UID: "c430edea-c079-433c-b91c-82e48f84722d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.508084 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c430edea-c079-433c-b91c-82e48f84722d-kube-api-access-wsznw" (OuterVolumeSpecName: "kube-api-access-wsznw") pod "c430edea-c079-433c-b91c-82e48f84722d" (UID: "c430edea-c079-433c-b91c-82e48f84722d"). InnerVolumeSpecName "kube-api-access-wsznw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.524739 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c430edea-c079-433c-b91c-82e48f84722d" (UID: "c430edea-c079-433c-b91c-82e48f84722d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.604657 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.605015 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c430edea-c079-433c-b91c-82e48f84722d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.605029 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsznw\" (UniqueName: \"kubernetes.io/projected/c430edea-c079-433c-b91c-82e48f84722d-kube-api-access-wsznw\") on node \"crc\" DevicePath \"\"" Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.939970 4803 generic.go:334] "Generic (PLEG): container finished" podID="c430edea-c079-433c-b91c-82e48f84722d" containerID="a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4" exitCode=0 Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.940033 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xwfs" Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.940033 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xwfs" event={"ID":"c430edea-c079-433c-b91c-82e48f84722d","Type":"ContainerDied","Data":"a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4"} Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.940108 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xwfs" event={"ID":"c430edea-c079-433c-b91c-82e48f84722d","Type":"ContainerDied","Data":"399d47d51d28220847327bc2b2573ebe79803ef77c303c1c8d9092f7a12daf2c"} Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.940143 4803 scope.go:117] "RemoveContainer" containerID="a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4" Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.958110 4803 scope.go:117] "RemoveContainer" containerID="e4ad7a6168a1977b24d7da0bee75d901423eb63f64e794c4a326197903dc862d" Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.971997 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xwfs"] Jan 27 22:03:32 crc kubenswrapper[4803]: I0127 22:03:32.977675 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xwfs"] Jan 27 22:03:33 crc kubenswrapper[4803]: I0127 22:03:33.003032 4803 scope.go:117] "RemoveContainer" containerID="109716005fbe304f96b507117ae98cd6d98acb3c192ca504d687006845fba2d1" Jan 27 22:03:33 crc kubenswrapper[4803]: I0127 22:03:33.022135 4803 scope.go:117] "RemoveContainer" containerID="a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4" Jan 27 22:03:33 crc kubenswrapper[4803]: E0127 22:03:33.022594 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4\": container with ID starting with a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4 not found: ID does not exist" containerID="a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4" Jan 27 22:03:33 crc kubenswrapper[4803]: I0127 22:03:33.022648 4803 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4"} err="failed to get container status \"a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4\": rpc error: code = NotFound desc = could not find container \"a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4\": container with ID starting with a201803eb5c411ed6effc4cca4c551abf66d4d71dd6336688cdc45b47c506aa4 not found: ID does not exist" Jan 27 22:03:33 crc kubenswrapper[4803]: I0127 22:03:33.022682 4803 scope.go:117] "RemoveContainer" containerID="e4ad7a6168a1977b24d7da0bee75d901423eb63f64e794c4a326197903dc862d" Jan 27 22:03:33 crc kubenswrapper[4803]: E0127 22:03:33.023034 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4ad7a6168a1977b24d7da0bee75d901423eb63f64e794c4a326197903dc862d\": container with ID starting with e4ad7a6168a1977b24d7da0bee75d901423eb63f64e794c4a326197903dc862d not found: ID does not exist" containerID="e4ad7a6168a1977b24d7da0bee75d901423eb63f64e794c4a326197903dc862d" Jan 27 22:03:33 crc kubenswrapper[4803]: I0127 22:03:33.023076 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ad7a6168a1977b24d7da0bee75d901423eb63f64e794c4a326197903dc862d"} err="failed to get container status \"e4ad7a6168a1977b24d7da0bee75d901423eb63f64e794c4a326197903dc862d\": rpc error: code = NotFound desc = could not find container \"e4ad7a6168a1977b24d7da0bee75d901423eb63f64e794c4a326197903dc862d\": container with ID starting with e4ad7a6168a1977b24d7da0bee75d901423eb63f64e794c4a326197903dc862d not found: ID does not exist" Jan 27 22:03:33 crc kubenswrapper[4803]: I0127 22:03:33.023103 4803 scope.go:117] "RemoveContainer" containerID="109716005fbe304f96b507117ae98cd6d98acb3c192ca504d687006845fba2d1" Jan 27 22:03:33 crc kubenswrapper[4803]: E0127 22:03:33.023490 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109716005fbe304f96b507117ae98cd6d98acb3c192ca504d687006845fba2d1\": container with ID starting with 109716005fbe304f96b507117ae98cd6d98acb3c192ca504d687006845fba2d1 not found: ID does not exist" containerID="109716005fbe304f96b507117ae98cd6d98acb3c192ca504d687006845fba2d1" Jan 27 22:03:33 crc kubenswrapper[4803]: I0127 22:03:33.023527 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109716005fbe304f96b507117ae98cd6d98acb3c192ca504d687006845fba2d1"} err="failed to get container status \"109716005fbe304f96b507117ae98cd6d98acb3c192ca504d687006845fba2d1\": rpc error: code = NotFound desc = could not find container \"109716005fbe304f96b507117ae98cd6d98acb3c192ca504d687006845fba2d1\": container with ID starting with 109716005fbe304f96b507117ae98cd6d98acb3c192ca504d687006845fba2d1 not found: ID does not exist" Jan 27 22:03:34 crc kubenswrapper[4803]: I0127 22:03:34.314473 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c430edea-c079-433c-b91c-82e48f84722d" path="/var/lib/kubelet/pods/c430edea-c079-433c-b91c-82e48f84722d/volumes" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.171372 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl"] Jan 27 22:03:59 crc kubenswrapper[4803]: E0127 22:03:59.172196 4803 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c430edea-c079-433c-b91c-82e48f84722d" containerName="registry-server" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.172213 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="c430edea-c079-433c-b91c-82e48f84722d" containerName="registry-server" Jan 27 22:03:59 crc kubenswrapper[4803]: E0127 22:03:59.172230 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c430edea-c079-433c-b91c-82e48f84722d" containerName="extract-utilities" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.172239 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="c430edea-c079-433c-b91c-82e48f84722d" containerName="extract-utilities" Jan 27 22:03:59 crc kubenswrapper[4803]: E0127 22:03:59.172269 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c430edea-c079-433c-b91c-82e48f84722d" containerName="extract-content" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.172277 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="c430edea-c079-433c-b91c-82e48f84722d" containerName="extract-content" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.172417 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="c430edea-c079-433c-b91c-82e48f84722d" containerName="registry-server" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.173411 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.176135 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.182654 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl"] Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.226623 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.226992 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.227042 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lplj5\" (UniqueName: \"kubernetes.io/projected/43e3512b-91c4-4472-851f-20dffb5b2b19-kube-api-access-lplj5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.328162 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.328526 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lplj5\" (UniqueName: \"kubernetes.io/projected/43e3512b-91c4-4472-851f-20dffb5b2b19-kube-api-access-lplj5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.328574 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.328703 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.329075 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.358019 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lplj5\" (UniqueName: \"kubernetes.io/projected/43e3512b-91c4-4472-851f-20dffb5b2b19-kube-api-access-lplj5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.489541 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:03:59 crc kubenswrapper[4803]: I0127 22:03:59.879632 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl"] Jan 27 22:04:00 crc kubenswrapper[4803]: I0127 22:04:00.154420 4803 generic.go:334] "Generic (PLEG): container finished" podID="43e3512b-91c4-4472-851f-20dffb5b2b19" containerID="862d302ed45b5b996e01ab10acfd109e076254a0b8581d4869177a19d230f80a" exitCode=0 Jan 27 22:04:00 crc kubenswrapper[4803]: I0127 22:04:00.154524 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" event={"ID":"43e3512b-91c4-4472-851f-20dffb5b2b19","Type":"ContainerDied","Data":"862d302ed45b5b996e01ab10acfd109e076254a0b8581d4869177a19d230f80a"} Jan 27 22:04:00 crc kubenswrapper[4803]: I0127 22:04:00.154723 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" event={"ID":"43e3512b-91c4-4472-851f-20dffb5b2b19","Type":"ContainerStarted","Data":"69e6f1a29bcc7594b82ed741c19da200862b53fb8b8bf175214d5a95f714dd53"} Jan 27 22:04:02 crc kubenswrapper[4803]: I0127 22:04:02.173830 4803 generic.go:334] "Generic (PLEG): container finished" podID="43e3512b-91c4-4472-851f-20dffb5b2b19" containerID="02f8152ddc26084eb6074c397511d199e1994eb378022bccd8dc01dc16103530" exitCode=0 Jan 27 22:04:02 crc kubenswrapper[4803]: I0127 22:04:02.173891 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" event={"ID":"43e3512b-91c4-4472-851f-20dffb5b2b19","Type":"ContainerDied","Data":"02f8152ddc26084eb6074c397511d199e1994eb378022bccd8dc01dc16103530"} Jan 27 22:04:03 crc kubenswrapper[4803]: I0127 22:04:03.184161 4803 generic.go:334] "Generic (PLEG): container finished" podID="43e3512b-91c4-4472-851f-20dffb5b2b19" containerID="2b4d147b619c4bb31493c76f39eb6a3d19e44a66cb84a123fb634a00f06b905e" exitCode=0 Jan 27 22:04:03 crc kubenswrapper[4803]: I0127 22:04:03.184232 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" event={"ID":"43e3512b-91c4-4472-851f-20dffb5b2b19","Type":"ContainerDied","Data":"2b4d147b619c4bb31493c76f39eb6a3d19e44a66cb84a123fb634a00f06b905e"} Jan 27 22:04:04 crc kubenswrapper[4803]: I0127 22:04:04.512594 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:04:04 crc kubenswrapper[4803]: I0127 22:04:04.712479 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-util\") pod \"43e3512b-91c4-4472-851f-20dffb5b2b19\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " Jan 27 22:04:04 crc kubenswrapper[4803]: I0127 22:04:04.712662 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-bundle\") pod \"43e3512b-91c4-4472-851f-20dffb5b2b19\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " Jan 27 22:04:04 crc kubenswrapper[4803]: I0127 22:04:04.712769 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lplj5\" (UniqueName: \"kubernetes.io/projected/43e3512b-91c4-4472-851f-20dffb5b2b19-kube-api-access-lplj5\") pod \"43e3512b-91c4-4472-851f-20dffb5b2b19\" (UID: \"43e3512b-91c4-4472-851f-20dffb5b2b19\") " Jan 27 22:04:04 crc kubenswrapper[4803]: I0127 22:04:04.713299 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-bundle" (OuterVolumeSpecName: "bundle") pod "43e3512b-91c4-4472-851f-20dffb5b2b19" (UID: "43e3512b-91c4-4472-851f-20dffb5b2b19"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:04:04 crc kubenswrapper[4803]: I0127 22:04:04.722042 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e3512b-91c4-4472-851f-20dffb5b2b19-kube-api-access-lplj5" (OuterVolumeSpecName: "kube-api-access-lplj5") pod "43e3512b-91c4-4472-851f-20dffb5b2b19" (UID: "43e3512b-91c4-4472-851f-20dffb5b2b19"). InnerVolumeSpecName "kube-api-access-lplj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:04:04 crc kubenswrapper[4803]: I0127 22:04:04.726954 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-util" (OuterVolumeSpecName: "util") pod "43e3512b-91c4-4472-851f-20dffb5b2b19" (UID: "43e3512b-91c4-4472-851f-20dffb5b2b19"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:04:04 crc kubenswrapper[4803]: I0127 22:04:04.814392 4803 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:04:04 crc kubenswrapper[4803]: I0127 22:04:04.814423 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lplj5\" (UniqueName: \"kubernetes.io/projected/43e3512b-91c4-4472-851f-20dffb5b2b19-kube-api-access-lplj5\") on node \"crc\" DevicePath \"\"" Jan 27 22:04:04 crc kubenswrapper[4803]: I0127 22:04:04.814432 4803 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43e3512b-91c4-4472-851f-20dffb5b2b19-util\") on node \"crc\" DevicePath \"\"" Jan 27 22:04:05 crc kubenswrapper[4803]: I0127 22:04:05.205730 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" event={"ID":"43e3512b-91c4-4472-851f-20dffb5b2b19","Type":"ContainerDied","Data":"69e6f1a29bcc7594b82ed741c19da200862b53fb8b8bf175214d5a95f714dd53"} Jan 27 22:04:05 crc kubenswrapper[4803]: I0127 22:04:05.205791 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e6f1a29bcc7594b82ed741c19da200862b53fb8b8bf175214d5a95f714dd53" Jan 27 22:04:05 crc kubenswrapper[4803]: I0127 22:04:05.205832 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.066532 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bdqpw"] Jan 27 22:04:11 crc kubenswrapper[4803]: E0127 22:04:11.069628 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e3512b-91c4-4472-851f-20dffb5b2b19" containerName="pull" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.069742 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e3512b-91c4-4472-851f-20dffb5b2b19" containerName="pull" Jan 27 22:04:11 crc kubenswrapper[4803]: E0127 22:04:11.069879 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e3512b-91c4-4472-851f-20dffb5b2b19" containerName="util" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.069993 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e3512b-91c4-4472-851f-20dffb5b2b19" containerName="util" Jan 27 22:04:11 crc kubenswrapper[4803]: E0127 22:04:11.070161 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e3512b-91c4-4472-851f-20dffb5b2b19" containerName="extract" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.070267 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e3512b-91c4-4472-851f-20dffb5b2b19" containerName="extract" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.070627 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e3512b-91c4-4472-851f-20dffb5b2b19" containerName="extract" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.071501 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bdqpw" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.076554 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.076597 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.077379 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-b2mzj" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.086799 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bdqpw"] Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.112119 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdl94\" (UniqueName: \"kubernetes.io/projected/7626f07b-4412-434f-87b9-406475aa7a90-kube-api-access-qdl94\") pod \"nmstate-operator-646758c888-bdqpw\" (UID: \"7626f07b-4412-434f-87b9-406475aa7a90\") " pod="openshift-nmstate/nmstate-operator-646758c888-bdqpw" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.214229 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdl94\" (UniqueName: \"kubernetes.io/projected/7626f07b-4412-434f-87b9-406475aa7a90-kube-api-access-qdl94\") pod \"nmstate-operator-646758c888-bdqpw\" (UID: \"7626f07b-4412-434f-87b9-406475aa7a90\") " pod="openshift-nmstate/nmstate-operator-646758c888-bdqpw" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.237936 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdl94\" (UniqueName: \"kubernetes.io/projected/7626f07b-4412-434f-87b9-406475aa7a90-kube-api-access-qdl94\") pod \"nmstate-operator-646758c888-bdqpw\" (UID: \"7626f07b-4412-434f-87b9-406475aa7a90\") " pod="openshift-nmstate/nmstate-operator-646758c888-bdqpw" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.394662 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bdqpw" Jan 27 22:04:11 crc kubenswrapper[4803]: I0127 22:04:11.861021 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bdqpw"] Jan 27 22:04:12 crc kubenswrapper[4803]: I0127 22:04:12.253495 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bdqpw" event={"ID":"7626f07b-4412-434f-87b9-406475aa7a90","Type":"ContainerStarted","Data":"5baa42fa6ea1e466855c054705ed55b7e201372bafc03a8260b8ae070c7fa594"} Jan 27 22:04:15 crc kubenswrapper[4803]: I0127 22:04:15.275876 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bdqpw" event={"ID":"7626f07b-4412-434f-87b9-406475aa7a90","Type":"ContainerStarted","Data":"b5b01c5cc80f3591272f3f292a0df9896e08a0866c2ccc5b2a1b606c37ab03bb"} Jan 27 22:04:15 crc kubenswrapper[4803]: I0127 22:04:15.294285 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-bdqpw" podStartSLOduration=1.8715795210000001 podStartE2EDuration="4.294264743s" podCreationTimestamp="2026-01-27 22:04:11 +0000 UTC" firstStartedPulling="2026-01-27 22:04:11.869681791 +0000 UTC m=+1004.285703510" lastFinishedPulling="2026-01-27 22:04:14.292367033 +0000 UTC m=+1006.708388732" observedRunningTime="2026-01-27 22:04:15.290653035 +0000 UTC m=+1007.706674744" watchObservedRunningTime="2026-01-27 22:04:15.294264743 +0000 UTC m=+1007.710286452" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.326636 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mt2x7"] Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.329339 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mt2x7" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.341262 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mt2x7"] Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.343899 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8mlzm" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.347983 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm"] Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.348771 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.350465 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.353383 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mpcd\" (UniqueName: \"kubernetes.io/projected/bd2efa75-5c9b-4f23-a284-9f69ae3587af-kube-api-access-4mpcd\") pod \"nmstate-metrics-54757c584b-mt2x7\" (UID: \"bd2efa75-5c9b-4f23-a284-9f69ae3587af\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mt2x7" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.353647 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/77dd058d-f38b-4382-923d-f68fbb3c9566-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bqlpm\" (UID: \"77dd058d-f38b-4382-923d-f68fbb3c9566\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.353732 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-972bn\" (UniqueName: \"kubernetes.io/projected/77dd058d-f38b-4382-923d-f68fbb3c9566-kube-api-access-972bn\") pod \"nmstate-webhook-8474b5b9d8-bqlpm\" (UID: \"77dd058d-f38b-4382-923d-f68fbb3c9566\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.360467 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wrzxs"] Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.361982 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.397612 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm"] Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.457888 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mpcd\" (UniqueName: \"kubernetes.io/projected/bd2efa75-5c9b-4f23-a284-9f69ae3587af-kube-api-access-4mpcd\") pod \"nmstate-metrics-54757c584b-mt2x7\" (UID: \"bd2efa75-5c9b-4f23-a284-9f69ae3587af\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mt2x7" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.458205 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/77dd058d-f38b-4382-923d-f68fbb3c9566-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bqlpm\" (UID: \"77dd058d-f38b-4382-923d-f68fbb3c9566\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.458229 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-972bn\" (UniqueName: \"kubernetes.io/projected/77dd058d-f38b-4382-923d-f68fbb3c9566-kube-api-access-972bn\") pod \"nmstate-webhook-8474b5b9d8-bqlpm\" (UID: \"77dd058d-f38b-4382-923d-f68fbb3c9566\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.458282 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2znvv\" (UniqueName: \"kubernetes.io/projected/89a353b4-798b-4f55-91ff-316a9840a7bb-kube-api-access-2znvv\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.458313 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/89a353b4-798b-4f55-91ff-316a9840a7bb-nmstate-lock\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: E0127 22:04:20.458326 4803 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.458342 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/89a353b4-798b-4f55-91ff-316a9840a7bb-dbus-socket\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: E0127 22:04:20.458392 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77dd058d-f38b-4382-923d-f68fbb3c9566-tls-key-pair podName:77dd058d-f38b-4382-923d-f68fbb3c9566 nodeName:}" failed. No retries permitted until 2026-01-27 22:04:20.958371891 +0000 UTC m=+1013.374393670 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/77dd058d-f38b-4382-923d-f68fbb3c9566-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-bqlpm" (UID: "77dd058d-f38b-4382-923d-f68fbb3c9566") : secret "openshift-nmstate-webhook" not found Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.458434 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/89a353b4-798b-4f55-91ff-316a9840a7bb-ovs-socket\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.493820 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mpcd\" (UniqueName: \"kubernetes.io/projected/bd2efa75-5c9b-4f23-a284-9f69ae3587af-kube-api-access-4mpcd\") pod \"nmstate-metrics-54757c584b-mt2x7\" (UID: \"bd2efa75-5c9b-4f23-a284-9f69ae3587af\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mt2x7" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.501501 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-972bn\" (UniqueName: \"kubernetes.io/projected/77dd058d-f38b-4382-923d-f68fbb3c9566-kube-api-access-972bn\") pod \"nmstate-webhook-8474b5b9d8-bqlpm\" (UID: \"77dd058d-f38b-4382-923d-f68fbb3c9566\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.564155 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq"] Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.565053 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.567115 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/89a353b4-798b-4f55-91ff-316a9840a7bb-dbus-socket\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.567175 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/89a353b4-798b-4f55-91ff-316a9840a7bb-ovs-socket\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.567297 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2znvv\" (UniqueName: \"kubernetes.io/projected/89a353b4-798b-4f55-91ff-316a9840a7bb-kube-api-access-2znvv\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.567354 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/89a353b4-798b-4f55-91ff-316a9840a7bb-nmstate-lock\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.567450 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/89a353b4-798b-4f55-91ff-316a9840a7bb-nmstate-lock\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.567692 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/89a353b4-798b-4f55-91ff-316a9840a7bb-dbus-socket\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.567733 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/89a353b4-798b-4f55-91ff-316a9840a7bb-ovs-socket\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.572544 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq"] Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.578291 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.578452 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.578499 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-mwpvn" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.604399 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-2znvv\" (UniqueName: \"kubernetes.io/projected/89a353b4-798b-4f55-91ff-316a9840a7bb-kube-api-access-2znvv\") pod \"nmstate-handler-wrzxs\" (UID: \"89a353b4-798b-4f55-91ff-316a9840a7bb\") " pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.669602 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r264g\" (UniqueName: \"kubernetes.io/projected/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-kube-api-access-r264g\") pod \"nmstate-console-plugin-7754f76f8b-7kfnq\" (UID: \"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.670544 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7kfnq\" (UID: \"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.670964 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7kfnq\" (UID: \"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.756460 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-99c48dff5-sj7f4"] Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.757567 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.757818 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mt2x7" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.772727 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7kfnq\" (UID: \"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.772789 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-console-config\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.772808 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvwsk\" (UniqueName: \"kubernetes.io/projected/e62b2a29-1e10-4064-93da-24b6d5e88397-kube-api-access-pvwsk\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.772856 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r264g\" (UniqueName: \"kubernetes.io/projected/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-kube-api-access-r264g\") pod \"nmstate-console-plugin-7754f76f8b-7kfnq\" (UID: \"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.772914 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-oauth-config\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.772933 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7kfnq\" (UID: \"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.772952 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-service-ca\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.772973 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-serving-cert\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.772989 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-oauth-serving-cert\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.773041 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-trusted-ca-bundle\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: E0127 22:04:20.773189 4803 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 22:04:20 crc kubenswrapper[4803]: E0127 22:04:20.773238 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-plugin-serving-cert podName:e9e0ba93-d76c-4c79-ac8e-cb250366ce7a nodeName:}" failed. No retries permitted until 2026-01-27 22:04:21.273223852 +0000 UTC m=+1013.689245551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-7kfnq" (UID: "e9e0ba93-d76c-4c79-ac8e-cb250366ce7a") : secret "plugin-serving-cert" not found Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.773758 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7kfnq\" (UID: \"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.818341 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r264g\" (UniqueName: \"kubernetes.io/projected/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-kube-api-access-r264g\") pod \"nmstate-console-plugin-7754f76f8b-7kfnq\" (UID: \"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.834936 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.835385 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-99c48dff5-sj7f4"] Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.874269 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-console-config\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.874525 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvwsk\" (UniqueName: \"kubernetes.io/projected/e62b2a29-1e10-4064-93da-24b6d5e88397-kube-api-access-pvwsk\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.874582 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-oauth-config\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.874600 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-service-ca\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.874624 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-serving-cert\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.874640 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-oauth-serving-cert\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.874695 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-trusted-ca-bundle\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.876076 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-oauth-serving-cert\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.876342 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-trusted-ca-bundle\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.877371 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-service-ca\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.879416 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-oauth-config\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.882140 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-serving-cert\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.882642 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-console-config\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.894208 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvwsk\" (UniqueName: \"kubernetes.io/projected/e62b2a29-1e10-4064-93da-24b6d5e88397-kube-api-access-pvwsk\") pod \"console-99c48dff5-sj7f4\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.975969 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/77dd058d-f38b-4382-923d-f68fbb3c9566-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bqlpm\" (UID: \"77dd058d-f38b-4382-923d-f68fbb3c9566\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" Jan 27 22:04:20 crc kubenswrapper[4803]: I0127 22:04:20.979792 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/77dd058d-f38b-4382-923d-f68fbb3c9566-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bqlpm\" (UID: \"77dd058d-f38b-4382-923d-f68fbb3c9566\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" Jan 27 22:04:21 crc kubenswrapper[4803]: I0127 22:04:21.077163 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" Jan 27 22:04:21 crc kubenswrapper[4803]: I0127 22:04:21.133554 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:21 crc kubenswrapper[4803]: I0127 22:04:21.261437 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mt2x7"] Jan 27 22:04:21 crc kubenswrapper[4803]: W0127 22:04:21.269105 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd2efa75_5c9b_4f23_a284_9f69ae3587af.slice/crio-31af93685e9d1daee990bc46e705ee0e99462908909a2ca2f939706750c8f858 WatchSource:0}: Error finding container 31af93685e9d1daee990bc46e705ee0e99462908909a2ca2f939706750c8f858: Status 404 returned error can't find the container with id 31af93685e9d1daee990bc46e705ee0e99462908909a2ca2f939706750c8f858 Jan 27 22:04:21 crc kubenswrapper[4803]: I0127 22:04:21.279446 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7kfnq\" (UID: \"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:21 crc kubenswrapper[4803]: I0127 22:04:21.284689 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9e0ba93-d76c-4c79-ac8e-cb250366ce7a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7kfnq\" (UID: \"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:21 crc kubenswrapper[4803]: I0127 22:04:21.311167 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm"] Jan 27 22:04:21 crc kubenswrapper[4803]: W0127 22:04:21.316318 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77dd058d_f38b_4382_923d_f68fbb3c9566.slice/crio-028f0f25cb0d84171a119276dfc21cebb15d58916903f4fb4370c00397d3b239 WatchSource:0}: Error finding container 028f0f25cb0d84171a119276dfc21cebb15d58916903f4fb4370c00397d3b239: Status 404 returned error can't find the container with id 028f0f25cb0d84171a119276dfc21cebb15d58916903f4fb4370c00397d3b239 Jan 27 22:04:21 crc kubenswrapper[4803]: I0127 22:04:21.319594 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mt2x7" event={"ID":"bd2efa75-5c9b-4f23-a284-9f69ae3587af","Type":"ContainerStarted","Data":"31af93685e9d1daee990bc46e705ee0e99462908909a2ca2f939706750c8f858"} Jan 27 22:04:21 crc kubenswrapper[4803]: I0127 22:04:21.323215 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wrzxs" event={"ID":"89a353b4-798b-4f55-91ff-316a9840a7bb","Type":"ContainerStarted","Data":"92bd3a21ea6b78e1c0f025b772cf97f04789cdeb50e38ff071eafaae3e438721"} Jan 27 22:04:21 crc kubenswrapper[4803]: I0127 22:04:21.487231 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" Jan 27 22:04:21 crc kubenswrapper[4803]: I0127 22:04:21.631591 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-99c48dff5-sj7f4"] Jan 27 22:04:21 crc kubenswrapper[4803]: I0127 22:04:21.954708 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq"] Jan 27 22:04:22 crc kubenswrapper[4803]: I0127 22:04:22.379079 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" event={"ID":"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a","Type":"ContainerStarted","Data":"1c15d62fb1688353b0f57ddad531e59d3be010eb1fbf85e5f1a7a41f1894473d"} Jan 27 22:04:22 crc kubenswrapper[4803]: I0127 22:04:22.384503 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-99c48dff5-sj7f4" event={"ID":"e62b2a29-1e10-4064-93da-24b6d5e88397","Type":"ContainerStarted","Data":"93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2"} Jan 27 22:04:22 crc kubenswrapper[4803]: I0127 22:04:22.384543 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-99c48dff5-sj7f4" event={"ID":"e62b2a29-1e10-4064-93da-24b6d5e88397","Type":"ContainerStarted","Data":"0c8a83fecdb0017674feb5f5972115987cf4f7b9bd8163111013fb89556df6a6"} Jan 27 22:04:22 crc kubenswrapper[4803]: I0127 22:04:22.393101 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" event={"ID":"77dd058d-f38b-4382-923d-f68fbb3c9566","Type":"ContainerStarted","Data":"028f0f25cb0d84171a119276dfc21cebb15d58916903f4fb4370c00397d3b239"} Jan 27 22:04:22 crc kubenswrapper[4803]: I0127 22:04:22.430854 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-99c48dff5-sj7f4" podStartSLOduration=2.430824287 podStartE2EDuration="2.430824287s" podCreationTimestamp="2026-01-27 22:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:04:22.427519718 +0000 UTC m=+1014.843541437" watchObservedRunningTime="2026-01-27 22:04:22.430824287 +0000 UTC m=+1014.846845986" Jan 27 22:04:24 crc kubenswrapper[4803]: I0127 22:04:24.410464 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" event={"ID":"e9e0ba93-d76c-4c79-ac8e-cb250366ce7a","Type":"ContainerStarted","Data":"fb6eb8c00a4d43e2de3f44928d8a6b3f1e6e0274cfd72a6fa1b0f8c946639b37"} Jan 27 22:04:24 crc kubenswrapper[4803]: I0127 22:04:24.413518 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mt2x7" event={"ID":"bd2efa75-5c9b-4f23-a284-9f69ae3587af","Type":"ContainerStarted","Data":"eea7730000b5fbeb3d4c6e65c51df47474826615a7d1a7d08c1bc7c434cf5a70"} Jan 27 22:04:24 crc kubenswrapper[4803]: I0127 22:04:24.415869 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wrzxs" event={"ID":"89a353b4-798b-4f55-91ff-316a9840a7bb","Type":"ContainerStarted","Data":"77619d28b5e7406fe8f8f32bc267884173a9b7a8bfb31044d50e37bb5cad9bb6"} Jan 27 22:04:24 crc kubenswrapper[4803]: I0127 22:04:24.415922 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:24 crc kubenswrapper[4803]: I0127 22:04:24.419173 4803 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" event={"ID":"77dd058d-f38b-4382-923d-f68fbb3c9566","Type":"ContainerStarted","Data":"e2687e865a3875998b72737340794dbf47476f4d31da6278f924f45a820dcf78"} Jan 27 22:04:24 crc kubenswrapper[4803]: I0127 22:04:24.419818 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" Jan 27 22:04:24 crc kubenswrapper[4803]: I0127 22:04:24.438974 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7kfnq" podStartSLOduration=2.266597453 podStartE2EDuration="4.438956035s" podCreationTimestamp="2026-01-27 22:04:20 +0000 UTC" firstStartedPulling="2026-01-27 22:04:21.959469131 +0000 UTC m=+1014.375490830" lastFinishedPulling="2026-01-27 22:04:24.131827723 +0000 UTC m=+1016.547849412" observedRunningTime="2026-01-27 22:04:24.432311775 +0000 UTC m=+1016.848333484" watchObservedRunningTime="2026-01-27 22:04:24.438956035 +0000 UTC m=+1016.854977734" Jan 27 22:04:24 crc kubenswrapper[4803]: I0127 22:04:24.456888 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" podStartSLOduration=1.645977826 podStartE2EDuration="4.456829068s" podCreationTimestamp="2026-01-27 22:04:20 +0000 UTC" firstStartedPulling="2026-01-27 22:04:21.320170679 +0000 UTC m=+1013.736192378" lastFinishedPulling="2026-01-27 22:04:24.131021911 +0000 UTC m=+1016.547043620" observedRunningTime="2026-01-27 22:04:24.455298157 +0000 UTC m=+1016.871319866" watchObservedRunningTime="2026-01-27 22:04:24.456829068 +0000 UTC m=+1016.872850767" Jan 27 22:04:24 crc kubenswrapper[4803]: I0127 22:04:24.477261 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wrzxs" podStartSLOduration=1.234227138 podStartE2EDuration="4.477243509s" podCreationTimestamp="2026-01-27 22:04:20 +0000 UTC" firstStartedPulling="2026-01-27 22:04:20.886025727 +0000 UTC m=+1013.302047426" lastFinishedPulling="2026-01-27 22:04:24.129042098 +0000 UTC m=+1016.545063797" observedRunningTime="2026-01-27 22:04:24.47207708 +0000 UTC m=+1016.888098779" watchObservedRunningTime="2026-01-27 22:04:24.477243509 +0000 UTC m=+1016.893265208" Jan 27 22:04:27 crc kubenswrapper[4803]: I0127 22:04:27.445088 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mt2x7" event={"ID":"bd2efa75-5c9b-4f23-a284-9f69ae3587af","Type":"ContainerStarted","Data":"13a35414782f99ba85bc749e508fde500a586adf857977b39e8988fc771f1517"} Jan 27 22:04:27 crc kubenswrapper[4803]: I0127 22:04:27.469824 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-mt2x7" podStartSLOduration=2.260863798 podStartE2EDuration="7.469800187s" podCreationTimestamp="2026-01-27 22:04:20 +0000 UTC" firstStartedPulling="2026-01-27 22:04:21.270984601 +0000 UTC m=+1013.687006300" lastFinishedPulling="2026-01-27 22:04:26.47992099 +0000 UTC m=+1018.895942689" observedRunningTime="2026-01-27 22:04:27.466799326 +0000 UTC m=+1019.882821045" watchObservedRunningTime="2026-01-27 22:04:27.469800187 +0000 UTC m=+1019.885821906" Jan 27 22:04:30 crc kubenswrapper[4803]: I0127 22:04:30.862129 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wrzxs" Jan 27 22:04:31 crc kubenswrapper[4803]: I0127 22:04:31.134195 4803 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:31 crc kubenswrapper[4803]: I0127 22:04:31.134447 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:31 crc kubenswrapper[4803]: I0127 22:04:31.139071 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:31 crc kubenswrapper[4803]: I0127 22:04:31.484473 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:04:31 crc kubenswrapper[4803]: I0127 22:04:31.588826 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-8db9b8f74-wdfx9"] Jan 27 22:04:41 crc kubenswrapper[4803]: I0127 22:04:41.087205 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" Jan 27 22:04:56 crc kubenswrapper[4803]: I0127 22:04:56.662779 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-8db9b8f74-wdfx9" podUID="80401bf8-2e71-4abf-83e4-346fd998733d" containerName="console" containerID="cri-o://e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4" gracePeriod=15 Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.118510 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-8db9b8f74-wdfx9_80401bf8-2e71-4abf-83e4-346fd998733d/console/0.log" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.118690 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.196405 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-console-config\") pod \"80401bf8-2e71-4abf-83e4-346fd998733d\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.196461 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-serving-cert\") pod \"80401bf8-2e71-4abf-83e4-346fd998733d\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.196496 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxcg5\" (UniqueName: \"kubernetes.io/projected/80401bf8-2e71-4abf-83e4-346fd998733d-kube-api-access-fxcg5\") pod \"80401bf8-2e71-4abf-83e4-346fd998733d\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.196525 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-oauth-serving-cert\") pod \"80401bf8-2e71-4abf-83e4-346fd998733d\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.196548 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-service-ca\") pod \"80401bf8-2e71-4abf-83e4-346fd998733d\" (UID: 
\"80401bf8-2e71-4abf-83e4-346fd998733d\") " Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.196612 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-trusted-ca-bundle\") pod \"80401bf8-2e71-4abf-83e4-346fd998733d\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.196652 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-oauth-config\") pod \"80401bf8-2e71-4abf-83e4-346fd998733d\" (UID: \"80401bf8-2e71-4abf-83e4-346fd998733d\") " Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.197408 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-service-ca" (OuterVolumeSpecName: "service-ca") pod "80401bf8-2e71-4abf-83e4-346fd998733d" (UID: "80401bf8-2e71-4abf-83e4-346fd998733d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.197452 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-console-config" (OuterVolumeSpecName: "console-config") pod "80401bf8-2e71-4abf-83e4-346fd998733d" (UID: "80401bf8-2e71-4abf-83e4-346fd998733d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.197464 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "80401bf8-2e71-4abf-83e4-346fd998733d" (UID: "80401bf8-2e71-4abf-83e4-346fd998733d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.197686 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "80401bf8-2e71-4abf-83e4-346fd998733d" (UID: "80401bf8-2e71-4abf-83e4-346fd998733d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.202897 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80401bf8-2e71-4abf-83e4-346fd998733d-kube-api-access-fxcg5" (OuterVolumeSpecName: "kube-api-access-fxcg5") pod "80401bf8-2e71-4abf-83e4-346fd998733d" (UID: "80401bf8-2e71-4abf-83e4-346fd998733d"). InnerVolumeSpecName "kube-api-access-fxcg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.203079 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "80401bf8-2e71-4abf-83e4-346fd998733d" (UID: "80401bf8-2e71-4abf-83e4-346fd998733d"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.203214 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "80401bf8-2e71-4abf-83e4-346fd998733d" (UID: "80401bf8-2e71-4abf-83e4-346fd998733d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.298190 4803 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.298233 4803 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.298248 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxcg5\" (UniqueName: \"kubernetes.io/projected/80401bf8-2e71-4abf-83e4-346fd998733d-kube-api-access-fxcg5\") on node \"crc\" DevicePath \"\"" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.298263 4803 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.298276 4803 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.298288 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80401bf8-2e71-4abf-83e4-346fd998733d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.298299 4803 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80401bf8-2e71-4abf-83e4-346fd998733d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.659759 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-8db9b8f74-wdfx9_80401bf8-2e71-4abf-83e4-346fd998733d/console/0.log" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.659806 4803 generic.go:334] "Generic (PLEG): container finished" podID="80401bf8-2e71-4abf-83e4-346fd998733d" containerID="e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4" exitCode=2 Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.659834 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8db9b8f74-wdfx9" event={"ID":"80401bf8-2e71-4abf-83e4-346fd998733d","Type":"ContainerDied","Data":"e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4"} Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.659887 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8db9b8f74-wdfx9" event={"ID":"80401bf8-2e71-4abf-83e4-346fd998733d","Type":"ContainerDied","Data":"4e42357b57431b5ebf41b2482081e3db26b6a9e6bfa558a0c69c12afe8496fb6"} Jan 27 22:04:57 crc 
kubenswrapper[4803]: I0127 22:04:57.659904 4803 scope.go:117] "RemoveContainer" containerID="e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.659899 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8db9b8f74-wdfx9" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.680413 4803 scope.go:117] "RemoveContainer" containerID="e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4" Jan 27 22:04:57 crc kubenswrapper[4803]: E0127 22:04:57.681002 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4\": container with ID starting with e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4 not found: ID does not exist" containerID="e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.681044 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4"} err="failed to get container status \"e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4\": rpc error: code = NotFound desc = could not find container \"e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4\": container with ID starting with e5e17b65a6b7a190950c9b5bcbf0668b6572276126910016a5850c877c0688a4 not found: ID does not exist" Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.697732 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-8db9b8f74-wdfx9"] Jan 27 22:04:57 crc kubenswrapper[4803]: I0127 22:04:57.704150 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-8db9b8f74-wdfx9"] Jan 27 22:04:58 crc kubenswrapper[4803]: I0127 22:04:58.349308 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80401bf8-2e71-4abf-83e4-346fd998733d" path="/var/lib/kubelet/pods/80401bf8-2e71-4abf-83e4-346fd998733d/volumes" Jan 27 22:04:58 crc kubenswrapper[4803]: I0127 22:04:58.849294 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8"] Jan 27 22:04:58 crc kubenswrapper[4803]: E0127 22:04:58.851002 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80401bf8-2e71-4abf-83e4-346fd998733d" containerName="console" Jan 27 22:04:58 crc kubenswrapper[4803]: I0127 22:04:58.851092 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="80401bf8-2e71-4abf-83e4-346fd998733d" containerName="console" Jan 27 22:04:58 crc kubenswrapper[4803]: I0127 22:04:58.851343 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="80401bf8-2e71-4abf-83e4-346fd998733d" containerName="console" Jan 27 22:04:58 crc kubenswrapper[4803]: I0127 22:04:58.852462 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:04:58 crc kubenswrapper[4803]: I0127 22:04:58.854428 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 22:04:58 crc kubenswrapper[4803]: I0127 22:04:58.856144 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8"] Jan 27 22:04:58 crc kubenswrapper[4803]: I0127 22:04:58.945595 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:04:58 crc kubenswrapper[4803]: I0127 22:04:58.945927 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:04:58 crc kubenswrapper[4803]: I0127 22:04:58.946021 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-484zr\" (UniqueName: \"kubernetes.io/projected/7026d76e-2c5e-4740-98c4-76c8f672f6c9-kube-api-access-484zr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:04:59 crc kubenswrapper[4803]: I0127 22:04:59.047365 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-484zr\" (UniqueName: \"kubernetes.io/projected/7026d76e-2c5e-4740-98c4-76c8f672f6c9-kube-api-access-484zr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:04:59 crc kubenswrapper[4803]: I0127 22:04:59.047812 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:04:59 crc kubenswrapper[4803]: I0127 22:04:59.048164 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:04:59 crc kubenswrapper[4803]: I0127 22:04:59.048296 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:04:59 crc kubenswrapper[4803]: I0127 22:04:59.048672 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:04:59 crc kubenswrapper[4803]: I0127 22:04:59.068900 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-484zr\" (UniqueName: \"kubernetes.io/projected/7026d76e-2c5e-4740-98c4-76c8f672f6c9-kube-api-access-484zr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:04:59 crc kubenswrapper[4803]: I0127 22:04:59.167588 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:04:59 crc kubenswrapper[4803]: I0127 22:04:59.562525 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8"] Jan 27 22:04:59 crc kubenswrapper[4803]: I0127 22:04:59.675853 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" event={"ID":"7026d76e-2c5e-4740-98c4-76c8f672f6c9","Type":"ContainerStarted","Data":"b507f3871637764acf18d0c4554f6735116f2ba23b41ccb6e2ae62a935e1ab9d"} Jan 27 22:05:00 crc kubenswrapper[4803]: I0127 22:05:00.682569 4803 generic.go:334] "Generic (PLEG): container finished" podID="7026d76e-2c5e-4740-98c4-76c8f672f6c9" containerID="882b265afde70e7b7c3d5c855c9b5f768f0171dfe8ba01c8a4ffe4e385238f0b" exitCode=0 Jan 27 22:05:00 crc kubenswrapper[4803]: I0127 22:05:00.682619 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" event={"ID":"7026d76e-2c5e-4740-98c4-76c8f672f6c9","Type":"ContainerDied","Data":"882b265afde70e7b7c3d5c855c9b5f768f0171dfe8ba01c8a4ffe4e385238f0b"} Jan 27 22:05:00 crc kubenswrapper[4803]: I0127 22:05:00.684392 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 22:05:02 crc kubenswrapper[4803]: I0127 22:05:02.713605 4803 generic.go:334] "Generic (PLEG): container finished" podID="7026d76e-2c5e-4740-98c4-76c8f672f6c9" containerID="ff66a44e68a87b8de43d3e8c8d44112d8cbaf7a38e7f72b089f6be30ee667886" exitCode=0 Jan 27 22:05:02 crc kubenswrapper[4803]: I0127 22:05:02.714025 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" event={"ID":"7026d76e-2c5e-4740-98c4-76c8f672f6c9","Type":"ContainerDied","Data":"ff66a44e68a87b8de43d3e8c8d44112d8cbaf7a38e7f72b089f6be30ee667886"} Jan 27 22:05:03 crc kubenswrapper[4803]: I0127 22:05:03.731441 4803 generic.go:334] "Generic (PLEG): container finished" 
podID="7026d76e-2c5e-4740-98c4-76c8f672f6c9" containerID="6e775556cb0843b24a348e1feac3f5ffe018f4fdfc68048bbf109ed5f6fe322e" exitCode=0 Jan 27 22:05:03 crc kubenswrapper[4803]: I0127 22:05:03.731488 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" event={"ID":"7026d76e-2c5e-4740-98c4-76c8f672f6c9","Type":"ContainerDied","Data":"6e775556cb0843b24a348e1feac3f5ffe018f4fdfc68048bbf109ed5f6fe322e"} Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.076999 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.161528 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-util\") pod \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.161713 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-484zr\" (UniqueName: \"kubernetes.io/projected/7026d76e-2c5e-4740-98c4-76c8f672f6c9-kube-api-access-484zr\") pod \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.161745 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-bundle\") pod \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\" (UID: \"7026d76e-2c5e-4740-98c4-76c8f672f6c9\") " Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.162688 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-bundle" (OuterVolumeSpecName: "bundle") pod "7026d76e-2c5e-4740-98c4-76c8f672f6c9" (UID: "7026d76e-2c5e-4740-98c4-76c8f672f6c9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.166759 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7026d76e-2c5e-4740-98c4-76c8f672f6c9-kube-api-access-484zr" (OuterVolumeSpecName: "kube-api-access-484zr") pod "7026d76e-2c5e-4740-98c4-76c8f672f6c9" (UID: "7026d76e-2c5e-4740-98c4-76c8f672f6c9"). InnerVolumeSpecName "kube-api-access-484zr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.174706 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-util" (OuterVolumeSpecName: "util") pod "7026d76e-2c5e-4740-98c4-76c8f672f6c9" (UID: "7026d76e-2c5e-4740-98c4-76c8f672f6c9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.278209 4803 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-util\") on node \"crc\" DevicePath \"\"" Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.278258 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-484zr\" (UniqueName: \"kubernetes.io/projected/7026d76e-2c5e-4740-98c4-76c8f672f6c9-kube-api-access-484zr\") on node \"crc\" DevicePath \"\"" Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.278273 4803 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7026d76e-2c5e-4740-98c4-76c8f672f6c9-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.749188 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" event={"ID":"7026d76e-2c5e-4740-98c4-76c8f672f6c9","Type":"ContainerDied","Data":"b507f3871637764acf18d0c4554f6735116f2ba23b41ccb6e2ae62a935e1ab9d"} Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.749229 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b507f3871637764acf18d0c4554f6735116f2ba23b41ccb6e2ae62a935e1ab9d" Jan 27 22:05:05 crc kubenswrapper[4803]: I0127 22:05:05.749239 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.924360 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb"] Jan 27 22:05:13 crc kubenswrapper[4803]: E0127 22:05:13.925159 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7026d76e-2c5e-4740-98c4-76c8f672f6c9" containerName="util" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.925173 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="7026d76e-2c5e-4740-98c4-76c8f672f6c9" containerName="util" Jan 27 22:05:13 crc kubenswrapper[4803]: E0127 22:05:13.925183 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7026d76e-2c5e-4740-98c4-76c8f672f6c9" containerName="extract" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.925189 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="7026d76e-2c5e-4740-98c4-76c8f672f6c9" containerName="extract" Jan 27 22:05:13 crc kubenswrapper[4803]: E0127 22:05:13.925215 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7026d76e-2c5e-4740-98c4-76c8f672f6c9" containerName="pull" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.925221 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="7026d76e-2c5e-4740-98c4-76c8f672f6c9" containerName="pull" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.925340 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="7026d76e-2c5e-4740-98c4-76c8f672f6c9" containerName="extract" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.925885 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.927633 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.927700 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.927824 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.927943 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-rstqr" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.928882 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 22:05:13 crc kubenswrapper[4803]: I0127 22:05:13.951838 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb"] Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.023310 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x47rn\" (UniqueName: \"kubernetes.io/projected/2beb4659-d63e-495f-a32f-f94cbcbbc1ce-kube-api-access-x47rn\") pod \"metallb-operator-controller-manager-848cc4d96f-sx8xb\" (UID: \"2beb4659-d63e-495f-a32f-f94cbcbbc1ce\") " pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.023421 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2beb4659-d63e-495f-a32f-f94cbcbbc1ce-apiservice-cert\") pod \"metallb-operator-controller-manager-848cc4d96f-sx8xb\" (UID: \"2beb4659-d63e-495f-a32f-f94cbcbbc1ce\") " pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.023442 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2beb4659-d63e-495f-a32f-f94cbcbbc1ce-webhook-cert\") pod \"metallb-operator-controller-manager-848cc4d96f-sx8xb\" (UID: \"2beb4659-d63e-495f-a32f-f94cbcbbc1ce\") " pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.124637 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x47rn\" (UniqueName: \"kubernetes.io/projected/2beb4659-d63e-495f-a32f-f94cbcbbc1ce-kube-api-access-x47rn\") pod \"metallb-operator-controller-manager-848cc4d96f-sx8xb\" (UID: \"2beb4659-d63e-495f-a32f-f94cbcbbc1ce\") " pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.124967 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2beb4659-d63e-495f-a32f-f94cbcbbc1ce-apiservice-cert\") pod \"metallb-operator-controller-manager-848cc4d96f-sx8xb\" (UID: \"2beb4659-d63e-495f-a32f-f94cbcbbc1ce\") " pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.125094 
4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2beb4659-d63e-495f-a32f-f94cbcbbc1ce-webhook-cert\") pod \"metallb-operator-controller-manager-848cc4d96f-sx8xb\" (UID: \"2beb4659-d63e-495f-a32f-f94cbcbbc1ce\") " pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.133689 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2beb4659-d63e-495f-a32f-f94cbcbbc1ce-apiservice-cert\") pod \"metallb-operator-controller-manager-848cc4d96f-sx8xb\" (UID: \"2beb4659-d63e-495f-a32f-f94cbcbbc1ce\") " pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.146833 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2beb4659-d63e-495f-a32f-f94cbcbbc1ce-webhook-cert\") pod \"metallb-operator-controller-manager-848cc4d96f-sx8xb\" (UID: \"2beb4659-d63e-495f-a32f-f94cbcbbc1ce\") " pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.152548 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x47rn\" (UniqueName: \"kubernetes.io/projected/2beb4659-d63e-495f-a32f-f94cbcbbc1ce-kube-api-access-x47rn\") pod \"metallb-operator-controller-manager-848cc4d96f-sx8xb\" (UID: \"2beb4659-d63e-495f-a32f-f94cbcbbc1ce\") " pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.239719 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-86894678c6-4f29p"] Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.240826 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.241542 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.243023 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.243040 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nq5sl" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.243338 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.271108 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86894678c6-4f29p"] Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.339680 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/038e0b5a-3e3b-462b-83ca-c9865b6f4240-apiservice-cert\") pod \"metallb-operator-webhook-server-86894678c6-4f29p\" (UID: \"038e0b5a-3e3b-462b-83ca-c9865b6f4240\") " pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.339773 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/038e0b5a-3e3b-462b-83ca-c9865b6f4240-webhook-cert\") pod \"metallb-operator-webhook-server-86894678c6-4f29p\" (UID: \"038e0b5a-3e3b-462b-83ca-c9865b6f4240\") " pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.339802 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4pv\" (UniqueName: \"kubernetes.io/projected/038e0b5a-3e3b-462b-83ca-c9865b6f4240-kube-api-access-7w4pv\") pod \"metallb-operator-webhook-server-86894678c6-4f29p\" (UID: \"038e0b5a-3e3b-462b-83ca-c9865b6f4240\") " pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.455886 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/038e0b5a-3e3b-462b-83ca-c9865b6f4240-apiservice-cert\") pod \"metallb-operator-webhook-server-86894678c6-4f29p\" (UID: \"038e0b5a-3e3b-462b-83ca-c9865b6f4240\") " pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.455955 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/038e0b5a-3e3b-462b-83ca-c9865b6f4240-webhook-cert\") pod \"metallb-operator-webhook-server-86894678c6-4f29p\" (UID: \"038e0b5a-3e3b-462b-83ca-c9865b6f4240\") " pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.455988 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4pv\" (UniqueName: \"kubernetes.io/projected/038e0b5a-3e3b-462b-83ca-c9865b6f4240-kube-api-access-7w4pv\") pod \"metallb-operator-webhook-server-86894678c6-4f29p\" (UID: \"038e0b5a-3e3b-462b-83ca-c9865b6f4240\") " pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 
22:05:14.461931 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/038e0b5a-3e3b-462b-83ca-c9865b6f4240-apiservice-cert\") pod \"metallb-operator-webhook-server-86894678c6-4f29p\" (UID: \"038e0b5a-3e3b-462b-83ca-c9865b6f4240\") " pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.462372 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/038e0b5a-3e3b-462b-83ca-c9865b6f4240-webhook-cert\") pod \"metallb-operator-webhook-server-86894678c6-4f29p\" (UID: \"038e0b5a-3e3b-462b-83ca-c9865b6f4240\") " pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.491506 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w4pv\" (UniqueName: \"kubernetes.io/projected/038e0b5a-3e3b-462b-83ca-c9865b6f4240-kube-api-access-7w4pv\") pod \"metallb-operator-webhook-server-86894678c6-4f29p\" (UID: \"038e0b5a-3e3b-462b-83ca-c9865b6f4240\") " pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.555458 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.778320 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb"] Jan 27 22:05:14 crc kubenswrapper[4803]: I0127 22:05:14.811205 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" event={"ID":"2beb4659-d63e-495f-a32f-f94cbcbbc1ce","Type":"ContainerStarted","Data":"c4606406977a6085d857da2e176d3a958397be85a861ca660c4fa73598e09c3d"} Jan 27 22:05:15 crc kubenswrapper[4803]: I0127 22:05:15.019912 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86894678c6-4f29p"] Jan 27 22:05:15 crc kubenswrapper[4803]: W0127 22:05:15.030471 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod038e0b5a_3e3b_462b_83ca_c9865b6f4240.slice/crio-3f5b5ea9d24cf515f516debeb291b00990c01897669f4875903f1aaa48a4b3d4 WatchSource:0}: Error finding container 3f5b5ea9d24cf515f516debeb291b00990c01897669f4875903f1aaa48a4b3d4: Status 404 returned error can't find the container with id 3f5b5ea9d24cf515f516debeb291b00990c01897669f4875903f1aaa48a4b3d4 Jan 27 22:05:15 crc kubenswrapper[4803]: I0127 22:05:15.819044 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" event={"ID":"038e0b5a-3e3b-462b-83ca-c9865b6f4240","Type":"ContainerStarted","Data":"3f5b5ea9d24cf515f516debeb291b00990c01897669f4875903f1aaa48a4b3d4"} Jan 27 22:05:16 crc kubenswrapper[4803]: I0127 22:05:16.343706 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:05:16 crc kubenswrapper[4803]: I0127 22:05:16.343784 4803 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:05:18 crc kubenswrapper[4803]: I0127 22:05:18.846976 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" event={"ID":"2beb4659-d63e-495f-a32f-f94cbcbbc1ce","Type":"ContainerStarted","Data":"9beec0dcd921f5de25004b6333c4745beacfaa117e7da813df6887bdf043a19e"} Jan 27 22:05:18 crc kubenswrapper[4803]: I0127 22:05:18.847527 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:19 crc kubenswrapper[4803]: I0127 22:05:19.855216 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" event={"ID":"038e0b5a-3e3b-462b-83ca-c9865b6f4240","Type":"ContainerStarted","Data":"36addb28749ee510ca1933290c9ef068a58c6a9b2265b87526943933882b0385"} Jan 27 22:05:19 crc kubenswrapper[4803]: I0127 22:05:19.855512 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:19 crc kubenswrapper[4803]: I0127 22:05:19.875964 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" podStartSLOduration=3.932757321 podStartE2EDuration="6.875945026s" podCreationTimestamp="2026-01-27 22:05:13 +0000 UTC" firstStartedPulling="2026-01-27 22:05:14.784065058 +0000 UTC m=+1067.200086767" lastFinishedPulling="2026-01-27 22:05:17.727252773 +0000 UTC m=+1070.143274472" observedRunningTime="2026-01-27 22:05:18.875254539 +0000 UTC m=+1071.291276238" watchObservedRunningTime="2026-01-27 22:05:19.875945026 +0000 UTC m=+1072.291966725" Jan 27 22:05:19 crc kubenswrapper[4803]: I0127 22:05:19.876416 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" podStartSLOduration=1.281801591 podStartE2EDuration="5.876409178s" podCreationTimestamp="2026-01-27 22:05:14 +0000 UTC" firstStartedPulling="2026-01-27 22:05:15.037746868 +0000 UTC m=+1067.453768577" lastFinishedPulling="2026-01-27 22:05:19.632354465 +0000 UTC m=+1072.048376164" observedRunningTime="2026-01-27 22:05:19.873869289 +0000 UTC m=+1072.289890998" watchObservedRunningTime="2026-01-27 22:05:19.876409178 +0000 UTC m=+1072.292430877" Jan 27 22:05:34 crc kubenswrapper[4803]: I0127 22:05:34.560470 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 22:05:46 crc kubenswrapper[4803]: I0127 22:05:46.343662 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:05:46 crc kubenswrapper[4803]: I0127 22:05:46.344533 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:05:54 crc kubenswrapper[4803]: I0127 22:05:54.244048 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.017261 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jsxr8"] Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.020221 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.029736 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d"] Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.034134 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cbshq" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.034432 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.034667 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.037906 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.039937 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.047693 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d"] Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.107074 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-metrics\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.107136 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-metrics-certs\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.107154 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-frr-startup\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.107196 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw679\" (UniqueName: \"kubernetes.io/projected/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-kube-api-access-nw679\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.107215 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-reloader\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.107255 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-frr-sockets\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.107284 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-frr-conf\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.129523 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-p9fmz"] Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.130670 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.132579 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.132709 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pft24" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.132579 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.133450 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.153216 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-2nc8h"] Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.154693 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.156820 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.179788 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-2nc8h"] Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208382 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4mz\" (UniqueName: \"kubernetes.io/projected/669fa453-18c2-4202-9ac3-117b6f000063-kube-api-access-sg4mz\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208421 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-metrics-certs\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208449 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-frr-sockets\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208484 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc45p\" (UniqueName: \"kubernetes.io/projected/ceff729d-b83b-45b4-99ef-d11ef9570efb-kube-api-access-kc45p\") pod \"frr-k8s-webhook-server-7df86c4f6c-tl69d\" (UID: \"ceff729d-b83b-45b4-99ef-d11ef9570efb\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208503 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-frr-conf\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208604 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-metrics\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208625 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ceff729d-b83b-45b4-99ef-d11ef9570efb-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-tl69d\" (UID: \"ceff729d-b83b-45b4-99ef-d11ef9570efb\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208650 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-memberlist\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208668 4803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/669fa453-18c2-4202-9ac3-117b6f000063-metallb-excludel2\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208691 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-metrics-certs\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208707 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-frr-startup\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208745 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw679\" (UniqueName: \"kubernetes.io/projected/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-kube-api-access-nw679\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.208764 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-reloader\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.209205 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-reloader\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.209394 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-frr-sockets\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.209581 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-frr-conf\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: E0127 22:05:55.209681 4803 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 27 22:05:55 crc kubenswrapper[4803]: E0127 22:05:55.209728 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-metrics-certs podName:0f079c02-e2f3-4dc3-aad2-86c70d3d41e8 nodeName:}" failed. No retries permitted until 2026-01-27 22:05:55.709712832 +0000 UTC m=+1108.125734531 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-metrics-certs") pod "frr-k8s-jsxr8" (UID: "0f079c02-e2f3-4dc3-aad2-86c70d3d41e8") : secret "frr-k8s-certs-secret" not found Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.209938 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-metrics\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.210617 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-frr-startup\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.248262 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw679\" (UniqueName: \"kubernetes.io/projected/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-kube-api-access-nw679\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.310252 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-memberlist\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.310297 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/669fa453-18c2-4202-9ac3-117b6f000063-metallb-excludel2\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.310375 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjsj\" (UniqueName: \"kubernetes.io/projected/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-kube-api-access-9vjsj\") pod \"controller-6968d8fdc4-2nc8h\" (UID: \"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c\") " pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.310449 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg4mz\" (UniqueName: \"kubernetes.io/projected/669fa453-18c2-4202-9ac3-117b6f000063-kube-api-access-sg4mz\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.310476 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-metrics-certs\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: E0127 22:05:55.310473 4803 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.310513 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-metrics-certs\") pod \"controller-6968d8fdc4-2nc8h\" (UID: \"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c\") " pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:55 crc kubenswrapper[4803]: E0127 22:05:55.310554 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-memberlist podName:669fa453-18c2-4202-9ac3-117b6f000063 nodeName:}" failed. No retries permitted until 2026-01-27 22:05:55.810534529 +0000 UTC m=+1108.226556228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-memberlist") pod "speaker-p9fmz" (UID: "669fa453-18c2-4202-9ac3-117b6f000063") : secret "metallb-memberlist" not found Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.310586 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc45p\" (UniqueName: \"kubernetes.io/projected/ceff729d-b83b-45b4-99ef-d11ef9570efb-kube-api-access-kc45p\") pod \"frr-k8s-webhook-server-7df86c4f6c-tl69d\" (UID: \"ceff729d-b83b-45b4-99ef-d11ef9570efb\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 22:05:55 crc kubenswrapper[4803]: E0127 22:05:55.310629 4803 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.310662 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-cert\") pod \"controller-6968d8fdc4-2nc8h\" (UID: \"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c\") " pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:55 crc kubenswrapper[4803]: E0127 22:05:55.310696 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-metrics-certs podName:669fa453-18c2-4202-9ac3-117b6f000063 nodeName:}" failed. No retries permitted until 2026-01-27 22:05:55.810675602 +0000 UTC m=+1108.226697301 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-metrics-certs") pod "speaker-p9fmz" (UID: "669fa453-18c2-4202-9ac3-117b6f000063") : secret "speaker-certs-secret" not found Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.310715 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ceff729d-b83b-45b4-99ef-d11ef9570efb-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-tl69d\" (UID: \"ceff729d-b83b-45b4-99ef-d11ef9570efb\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.311618 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/669fa453-18c2-4202-9ac3-117b6f000063-metallb-excludel2\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.317185 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ceff729d-b83b-45b4-99ef-d11ef9570efb-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-tl69d\" (UID: \"ceff729d-b83b-45b4-99ef-d11ef9570efb\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.329502 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc45p\" (UniqueName: \"kubernetes.io/projected/ceff729d-b83b-45b4-99ef-d11ef9570efb-kube-api-access-kc45p\") pod \"frr-k8s-webhook-server-7df86c4f6c-tl69d\" (UID: \"ceff729d-b83b-45b4-99ef-d11ef9570efb\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.330148 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg4mz\" (UniqueName: \"kubernetes.io/projected/669fa453-18c2-4202-9ac3-117b6f000063-kube-api-access-sg4mz\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.369533 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.412683 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vjsj\" (UniqueName: \"kubernetes.io/projected/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-kube-api-access-9vjsj\") pod \"controller-6968d8fdc4-2nc8h\" (UID: \"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c\") " pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.412790 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-metrics-certs\") pod \"controller-6968d8fdc4-2nc8h\" (UID: \"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c\") " pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.412838 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-cert\") pod \"controller-6968d8fdc4-2nc8h\" (UID: \"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c\") " pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:55 crc kubenswrapper[4803]: E0127 22:05:55.412981 4803 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 27 22:05:55 crc kubenswrapper[4803]: E0127 22:05:55.413084 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-metrics-certs podName:802fd9e5-a4c1-4195-b95a-e8fde55cbe1c nodeName:}" failed. No retries permitted until 2026-01-27 22:05:55.913053761 +0000 UTC m=+1108.329075460 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-metrics-certs") pod "controller-6968d8fdc4-2nc8h" (UID: "802fd9e5-a4c1-4195-b95a-e8fde55cbe1c") : secret "controller-certs-secret" not found Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.415491 4803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.427376 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-cert\") pod \"controller-6968d8fdc4-2nc8h\" (UID: \"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c\") " pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.432460 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vjsj\" (UniqueName: \"kubernetes.io/projected/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-kube-api-access-9vjsj\") pod \"controller-6968d8fdc4-2nc8h\" (UID: \"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c\") " pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.718777 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-metrics-certs\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.722190 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f079c02-e2f3-4dc3-aad2-86c70d3d41e8-metrics-certs\") pod \"frr-k8s-jsxr8\" (UID: \"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8\") " pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.780442 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d"] Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.820757 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-memberlist\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.820875 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-metrics-certs\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: E0127 22:05:55.821162 4803 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 22:05:55 crc kubenswrapper[4803]: E0127 22:05:55.821239 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-memberlist podName:669fa453-18c2-4202-9ac3-117b6f000063 nodeName:}" failed. No retries permitted until 2026-01-27 22:05:56.82122154 +0000 UTC m=+1109.237243239 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-memberlist") pod "speaker-p9fmz" (UID: "669fa453-18c2-4202-9ac3-117b6f000063") : secret "metallb-memberlist" not found Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.826420 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-metrics-certs\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.921891 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-metrics-certs\") pod \"controller-6968d8fdc4-2nc8h\" (UID: \"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c\") " pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.926538 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/802fd9e5-a4c1-4195-b95a-e8fde55cbe1c-metrics-certs\") pod \"controller-6968d8fdc4-2nc8h\" (UID: \"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c\") " pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:55 crc kubenswrapper[4803]: I0127 22:05:55.940739 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:05:56 crc kubenswrapper[4803]: I0127 22:05:56.073904 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:56 crc kubenswrapper[4803]: I0127 22:05:56.229519 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" event={"ID":"ceff729d-b83b-45b4-99ef-d11ef9570efb","Type":"ContainerStarted","Data":"6db541b3249c41fc981739dd549776dfa23365ee67c35c6a1bf26b09b1b81ded"} Jan 27 22:05:56 crc kubenswrapper[4803]: I0127 22:05:56.238598 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerStarted","Data":"cfadccb95be70921eef8edd37af4ef03360c828573b79e68dd088b63a1067639"} Jan 27 22:05:56 crc kubenswrapper[4803]: I0127 22:05:56.524154 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-2nc8h"] Jan 27 22:05:56 crc kubenswrapper[4803]: W0127 22:05:56.524405 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod802fd9e5_a4c1_4195_b95a_e8fde55cbe1c.slice/crio-1804d23146286b0deb610460d50e15bc82977d5aa24fffc9ca1e76b1405f6beb WatchSource:0}: Error finding container 1804d23146286b0deb610460d50e15bc82977d5aa24fffc9ca1e76b1405f6beb: Status 404 returned error can't find the container with id 1804d23146286b0deb610460d50e15bc82977d5aa24fffc9ca1e76b1405f6beb Jan 27 22:05:56 crc kubenswrapper[4803]: I0127 22:05:56.844558 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-memberlist\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:56 crc kubenswrapper[4803]: I0127 22:05:56.850001 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/669fa453-18c2-4202-9ac3-117b6f000063-memberlist\") pod \"speaker-p9fmz\" (UID: \"669fa453-18c2-4202-9ac3-117b6f000063\") " pod="metallb-system/speaker-p9fmz" Jan 27 22:05:56 crc kubenswrapper[4803]: I0127 22:05:56.944359 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-p9fmz" Jan 27 22:05:56 crc kubenswrapper[4803]: W0127 22:05:56.963464 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod669fa453_18c2_4202_9ac3_117b6f000063.slice/crio-b09172228fbb456a3d54d48656a35cf859213ce75faf1c287fafe73674f0ac7e WatchSource:0}: Error finding container b09172228fbb456a3d54d48656a35cf859213ce75faf1c287fafe73674f0ac7e: Status 404 returned error can't find the container with id b09172228fbb456a3d54d48656a35cf859213ce75faf1c287fafe73674f0ac7e Jan 27 22:05:57 crc kubenswrapper[4803]: I0127 22:05:57.247134 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-p9fmz" event={"ID":"669fa453-18c2-4202-9ac3-117b6f000063","Type":"ContainerStarted","Data":"8daabcc4acdce99c39b992edbea95c261698507c00f768b1ab94bc7803021c01"} Jan 27 22:05:57 crc kubenswrapper[4803]: I0127 22:05:57.247188 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-p9fmz" event={"ID":"669fa453-18c2-4202-9ac3-117b6f000063","Type":"ContainerStarted","Data":"b09172228fbb456a3d54d48656a35cf859213ce75faf1c287fafe73674f0ac7e"} Jan 27 22:05:57 crc kubenswrapper[4803]: I0127 22:05:57.249353 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-2nc8h" event={"ID":"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c","Type":"ContainerStarted","Data":"1acd2de88ca5ffbb6493a63cbaa70500b4953f0d6717f9764b97950e2a9c608b"} Jan 27 22:05:57 crc kubenswrapper[4803]: I0127 22:05:57.249383 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-2nc8h" event={"ID":"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c","Type":"ContainerStarted","Data":"a541ca6174cb0577362530655d93e2c9ec252b482a24d614bbeeabbcf24b44ce"} Jan 27 22:05:57 crc kubenswrapper[4803]: I0127 22:05:57.249397 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-2nc8h" event={"ID":"802fd9e5-a4c1-4195-b95a-e8fde55cbe1c","Type":"ContainerStarted","Data":"1804d23146286b0deb610460d50e15bc82977d5aa24fffc9ca1e76b1405f6beb"} Jan 27 22:05:57 crc kubenswrapper[4803]: I0127 22:05:57.249547 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:05:57 crc kubenswrapper[4803]: I0127 22:05:57.272044 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-2nc8h" podStartSLOduration=2.272027725 podStartE2EDuration="2.272027725s" podCreationTimestamp="2026-01-27 22:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:05:57.26627365 +0000 UTC m=+1109.682295349" watchObservedRunningTime="2026-01-27 22:05:57.272027725 +0000 UTC m=+1109.688049424" Jan 27 22:05:58 crc kubenswrapper[4803]: I0127 22:05:58.259981 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-p9fmz" event={"ID":"669fa453-18c2-4202-9ac3-117b6f000063","Type":"ContainerStarted","Data":"eb27410f3665222637c08fcdb975508e4b8097ed7d57d73866a65cadb82c478e"} 
Jan 27 22:05:58 crc kubenswrapper[4803]: I0127 22:05:58.260278 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-p9fmz" Jan 27 22:05:58 crc kubenswrapper[4803]: I0127 22:05:58.279801 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-p9fmz" podStartSLOduration=3.279783851 podStartE2EDuration="3.279783851s" podCreationTimestamp="2026-01-27 22:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:05:58.273361018 +0000 UTC m=+1110.689382727" watchObservedRunningTime="2026-01-27 22:05:58.279783851 +0000 UTC m=+1110.695805550" Jan 27 22:06:06 crc kubenswrapper[4803]: I0127 22:06:06.079230 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-2nc8h" Jan 27 22:06:06 crc kubenswrapper[4803]: I0127 22:06:06.342197 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" event={"ID":"ceff729d-b83b-45b4-99ef-d11ef9570efb","Type":"ContainerStarted","Data":"5b12992b803de9e1b315d60a241173e03758c3ba53973d8bdeeb283abbc8275a"} Jan 27 22:06:06 crc kubenswrapper[4803]: I0127 22:06:06.342254 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 22:06:06 crc kubenswrapper[4803]: I0127 22:06:06.344150 4803 generic.go:334] "Generic (PLEG): container finished" podID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerID="ac259479c6aaf7231b98ea8bdd9dd328faaca787845a6d0375fa452a8d399192" exitCode=0 Jan 27 22:06:06 crc kubenswrapper[4803]: I0127 22:06:06.344198 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerDied","Data":"ac259479c6aaf7231b98ea8bdd9dd328faaca787845a6d0375fa452a8d399192"} Jan 27 22:06:06 crc kubenswrapper[4803]: I0127 22:06:06.363091 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" podStartSLOduration=1.830773355 podStartE2EDuration="11.363072832s" podCreationTimestamp="2026-01-27 22:05:55 +0000 UTC" firstStartedPulling="2026-01-27 22:05:55.789485635 +0000 UTC m=+1108.205507334" lastFinishedPulling="2026-01-27 22:06:05.321785112 +0000 UTC m=+1117.737806811" observedRunningTime="2026-01-27 22:06:06.356985808 +0000 UTC m=+1118.773007517" watchObservedRunningTime="2026-01-27 22:06:06.363072832 +0000 UTC m=+1118.779094531" Jan 27 22:06:07 crc kubenswrapper[4803]: I0127 22:06:07.359777 4803 generic.go:334] "Generic (PLEG): container finished" podID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerID="1b62a42641f29ea9d75f98cb0cc3d0acb8617ef69c16722cae595cefd95eaad2" exitCode=0 Jan 27 22:06:07 crc kubenswrapper[4803]: I0127 22:06:07.359934 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerDied","Data":"1b62a42641f29ea9d75f98cb0cc3d0acb8617ef69c16722cae595cefd95eaad2"} Jan 27 22:06:08 crc kubenswrapper[4803]: I0127 22:06:08.372240 4803 generic.go:334] "Generic (PLEG): container finished" podID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerID="7d00d409250c5d7b294ed9bf7a3cb094e6e88a4c8907684889d1eb74e2cc6758" exitCode=0 Jan 27 22:06:08 crc kubenswrapper[4803]: I0127 22:06:08.372350 4803 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerDied","Data":"7d00d409250c5d7b294ed9bf7a3cb094e6e88a4c8907684889d1eb74e2cc6758"} Jan 27 22:06:09 crc kubenswrapper[4803]: I0127 22:06:09.384627 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerStarted","Data":"5580c1165367ca6581ea46421a55e784f1847c3380dd412a3f534fd9127dad8e"} Jan 27 22:06:09 crc kubenswrapper[4803]: I0127 22:06:09.385091 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerStarted","Data":"9854e0806088049915079878db03c10eac5e36ab3d0882b3afd6f2c532ca4462"} Jan 27 22:06:09 crc kubenswrapper[4803]: I0127 22:06:09.385107 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerStarted","Data":"0b81d060bcaffd780dfdd89fda9d47cb92319d291a41e36a4020b05b199e8262"} Jan 27 22:06:09 crc kubenswrapper[4803]: I0127 22:06:09.385120 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerStarted","Data":"6a74dd1430ece5cbf4721aa93949fb5fbf67b71d4900faa0b21496b2bacfd72e"} Jan 27 22:06:09 crc kubenswrapper[4803]: I0127 22:06:09.385131 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerStarted","Data":"66c47e501dec82dfdaca29b5e31eb6b0bc321e1ca7f4e54e92ff3c5ea0a160b2"} Jan 27 22:06:10 crc kubenswrapper[4803]: I0127 22:06:10.396205 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerStarted","Data":"bf0a8a0a8f0d9d0c314242382e78b6a5fb95828a7eaf137cd83b39d41bc371fe"} Jan 27 22:06:10 crc kubenswrapper[4803]: I0127 22:06:10.396408 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:06:10 crc kubenswrapper[4803]: I0127 22:06:10.428592 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jsxr8" podStartSLOduration=7.163153489 podStartE2EDuration="16.428572095s" podCreationTimestamp="2026-01-27 22:05:54 +0000 UTC" firstStartedPulling="2026-01-27 22:05:56.083988511 +0000 UTC m=+1108.500010210" lastFinishedPulling="2026-01-27 22:06:05.349407117 +0000 UTC m=+1117.765428816" observedRunningTime="2026-01-27 22:06:10.424377743 +0000 UTC m=+1122.840399442" watchObservedRunningTime="2026-01-27 22:06:10.428572095 +0000 UTC m=+1122.844593794" Jan 27 22:06:10 crc kubenswrapper[4803]: I0127 22:06:10.941053 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:06:10 crc kubenswrapper[4803]: I0127 22:06:10.975399 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:06:15 crc kubenswrapper[4803]: I0127 22:06:15.375355 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 22:06:16 crc kubenswrapper[4803]: I0127 22:06:16.343650 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:06:16 crc kubenswrapper[4803]: I0127 22:06:16.343715 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:06:16 crc kubenswrapper[4803]: I0127 22:06:16.343763 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 22:06:16 crc kubenswrapper[4803]: I0127 22:06:16.344637 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5bed6f52f57219858cf339986b99dcfe79ad6cdcbe8912b0cb981f2d60d0415"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 22:06:16 crc kubenswrapper[4803]: I0127 22:06:16.344724 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://a5bed6f52f57219858cf339986b99dcfe79ad6cdcbe8912b0cb981f2d60d0415" gracePeriod=600 Jan 27 22:06:16 crc kubenswrapper[4803]: I0127 22:06:16.478141 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="a5bed6f52f57219858cf339986b99dcfe79ad6cdcbe8912b0cb981f2d60d0415" exitCode=0 Jan 27 22:06:16 crc kubenswrapper[4803]: I0127 22:06:16.478183 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"a5bed6f52f57219858cf339986b99dcfe79ad6cdcbe8912b0cb981f2d60d0415"} Jan 27 22:06:16 crc kubenswrapper[4803]: I0127 22:06:16.478213 4803 scope.go:117] "RemoveContainer" containerID="b9f834f520954d1f715c48108c608cf768b5ff78d5b3a0ccfc176c140c448267" Jan 27 22:06:16 crc kubenswrapper[4803]: I0127 22:06:16.948788 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-p9fmz" Jan 27 22:06:17 crc kubenswrapper[4803]: I0127 22:06:17.490176 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"44535dae9f522c885b28c5811071a2781a43938af387dee7b52c5fee20b7bdeb"} Jan 27 22:06:21 crc kubenswrapper[4803]: I0127 22:06:21.484144 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kbt97"] Jan 27 22:06:21 crc kubenswrapper[4803]: I0127 22:06:21.485628 4803 util.go:30] "No sandbox for pod can be found. 
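The liveness sequence above is self-contained: the probe GETs http://127.0.0.1:8798/health, the connection is refused, the kubelet marks the container unhealthy, kills it with a 600s grace period, and PLEG reports the replacement container. The check itself can be reproduced by hand; a sketch (the URL is verbatim from the probe output above, the rest is mine):

```python
import urllib.request
import urllib.error

URL = "http://127.0.0.1:8798/health"   # taken verbatim from the probe output above

def probe(url=URL, timeout=1.0):
    """Return True iff the endpoint answers 200, like a passing HTTP probe."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # "connect: connection refused" in the log corresponds to this branch.
        return False

print(probe())   # False while machine-config-daemon's health server is down
```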
Need to start a new one" pod="openstack-operators/openstack-operator-index-kbt97" Jan 27 22:06:21 crc kubenswrapper[4803]: I0127 22:06:21.488383 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 22:06:21 crc kubenswrapper[4803]: I0127 22:06:21.488594 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 22:06:21 crc kubenswrapper[4803]: I0127 22:06:21.492458 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nwp4c" Jan 27 22:06:21 crc kubenswrapper[4803]: I0127 22:06:21.495781 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kbt97"] Jan 27 22:06:21 crc kubenswrapper[4803]: I0127 22:06:21.569731 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl45v\" (UniqueName: \"kubernetes.io/projected/94cd0db2-ba1f-4eea-8d28-10aa293e7645-kube-api-access-cl45v\") pod \"openstack-operator-index-kbt97\" (UID: \"94cd0db2-ba1f-4eea-8d28-10aa293e7645\") " pod="openstack-operators/openstack-operator-index-kbt97" Jan 27 22:06:21 crc kubenswrapper[4803]: I0127 22:06:21.671311 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl45v\" (UniqueName: \"kubernetes.io/projected/94cd0db2-ba1f-4eea-8d28-10aa293e7645-kube-api-access-cl45v\") pod \"openstack-operator-index-kbt97\" (UID: \"94cd0db2-ba1f-4eea-8d28-10aa293e7645\") " pod="openstack-operators/openstack-operator-index-kbt97" Jan 27 22:06:21 crc kubenswrapper[4803]: I0127 22:06:21.691134 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl45v\" (UniqueName: \"kubernetes.io/projected/94cd0db2-ba1f-4eea-8d28-10aa293e7645-kube-api-access-cl45v\") pod \"openstack-operator-index-kbt97\" (UID: \"94cd0db2-ba1f-4eea-8d28-10aa293e7645\") " pod="openstack-operators/openstack-operator-index-kbt97" Jan 27 22:06:21 crc kubenswrapper[4803]: I0127 22:06:21.803255 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kbt97" Jan 27 22:06:22 crc kubenswrapper[4803]: I0127 22:06:22.265690 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kbt97"] Jan 27 22:06:22 crc kubenswrapper[4803]: I0127 22:06:22.531257 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kbt97" event={"ID":"94cd0db2-ba1f-4eea-8d28-10aa293e7645","Type":"ContainerStarted","Data":"0970fb25f1687e021ad760325d1a2e15bba0b7870a0a6b48180c122a8f9d7c91"} Jan 27 22:06:24 crc kubenswrapper[4803]: I0127 22:06:24.283468 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kbt97"] Jan 27 22:06:24 crc kubenswrapper[4803]: I0127 22:06:24.550991 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kbt97" event={"ID":"94cd0db2-ba1f-4eea-8d28-10aa293e7645","Type":"ContainerStarted","Data":"104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c"} Jan 27 22:06:24 crc kubenswrapper[4803]: I0127 22:06:24.551115 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-kbt97" podUID="94cd0db2-ba1f-4eea-8d28-10aa293e7645" containerName="registry-server" containerID="cri-o://104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c" gracePeriod=2 Jan 27 22:06:24 crc kubenswrapper[4803]: I0127 22:06:24.573973 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kbt97" podStartSLOduration=1.604084681 podStartE2EDuration="3.573953454s" podCreationTimestamp="2026-01-27 22:06:21 +0000 UTC" firstStartedPulling="2026-01-27 22:06:22.265519918 +0000 UTC m=+1134.681541617" lastFinishedPulling="2026-01-27 22:06:24.235388691 +0000 UTC m=+1136.651410390" observedRunningTime="2026-01-27 22:06:24.564215041 +0000 UTC m=+1136.980236730" watchObservedRunningTime="2026-01-27 22:06:24.573953454 +0000 UTC m=+1136.989975153" Jan 27 22:06:24 crc kubenswrapper[4803]: I0127 22:06:24.887833 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tp8d4"] Jan 27 22:06:24 crc kubenswrapper[4803]: I0127 22:06:24.889128 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tp8d4" Jan 27 22:06:24 crc kubenswrapper[4803]: I0127 22:06:24.895804 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tp8d4"] Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.046292 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm27s\" (UniqueName: \"kubernetes.io/projected/b438c007-ef5f-4ed3-8f81-c5ac6d0209ac-kube-api-access-hm27s\") pod \"openstack-operator-index-tp8d4\" (UID: \"b438c007-ef5f-4ed3-8f81-c5ac6d0209ac\") " pod="openstack-operators/openstack-operator-index-tp8d4" Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.071176 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kbt97" Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.148194 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm27s\" (UniqueName: \"kubernetes.io/projected/b438c007-ef5f-4ed3-8f81-c5ac6d0209ac-kube-api-access-hm27s\") pod \"openstack-operator-index-tp8d4\" (UID: \"b438c007-ef5f-4ed3-8f81-c5ac6d0209ac\") " pod="openstack-operators/openstack-operator-index-tp8d4" Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.171118 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm27s\" (UniqueName: \"kubernetes.io/projected/b438c007-ef5f-4ed3-8f81-c5ac6d0209ac-kube-api-access-hm27s\") pod \"openstack-operator-index-tp8d4\" (UID: \"b438c007-ef5f-4ed3-8f81-c5ac6d0209ac\") " pod="openstack-operators/openstack-operator-index-tp8d4" Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.216564 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tp8d4" Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.249147 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl45v\" (UniqueName: \"kubernetes.io/projected/94cd0db2-ba1f-4eea-8d28-10aa293e7645-kube-api-access-cl45v\") pod \"94cd0db2-ba1f-4eea-8d28-10aa293e7645\" (UID: \"94cd0db2-ba1f-4eea-8d28-10aa293e7645\") " Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.253570 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94cd0db2-ba1f-4eea-8d28-10aa293e7645-kube-api-access-cl45v" (OuterVolumeSpecName: "kube-api-access-cl45v") pod "94cd0db2-ba1f-4eea-8d28-10aa293e7645" (UID: "94cd0db2-ba1f-4eea-8d28-10aa293e7645"). InnerVolumeSpecName "kube-api-access-cl45v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.351059 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl45v\" (UniqueName: \"kubernetes.io/projected/94cd0db2-ba1f-4eea-8d28-10aa293e7645-kube-api-access-cl45v\") on node \"crc\" DevicePath \"\"" Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.560774 4803 generic.go:334] "Generic (PLEG): container finished" podID="94cd0db2-ba1f-4eea-8d28-10aa293e7645" containerID="104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c" exitCode=0 Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.560816 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kbt97" event={"ID":"94cd0db2-ba1f-4eea-8d28-10aa293e7645","Type":"ContainerDied","Data":"104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c"} Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.560902 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kbt97" event={"ID":"94cd0db2-ba1f-4eea-8d28-10aa293e7645","Type":"ContainerDied","Data":"0970fb25f1687e021ad760325d1a2e15bba0b7870a0a6b48180c122a8f9d7c91"} Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.560924 4803 scope.go:117] "RemoveContainer" containerID="104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c" Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.560831 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kbt97" Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.586377 4803 scope.go:117] "RemoveContainer" containerID="104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c" Jan 27 22:06:25 crc kubenswrapper[4803]: E0127 22:06:25.586763 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c\": container with ID starting with 104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c not found: ID does not exist" containerID="104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c" Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.586803 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c"} err="failed to get container status \"104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c\": rpc error: code = NotFound desc = could not find container \"104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c\": container with ID starting with 104e4ff2f31cb1a4a3a99be404643b2a979ce8a5263869eca4f2faae7c093e3c not found: ID does not exist" Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.590058 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kbt97"] Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.595128 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-kbt97"] Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.645884 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tp8d4"] Jan 27 22:06:25 crc kubenswrapper[4803]: W0127 22:06:25.648468 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb438c007_ef5f_4ed3_8f81_c5ac6d0209ac.slice/crio-dd10222652822f95020d29edd3260cc40f7aa85c9dcb25094675ce8c616c6cad WatchSource:0}: Error finding container dd10222652822f95020d29edd3260cc40f7aa85c9dcb25094675ce8c616c6cad: Status 404 returned error can't find the container with id dd10222652822f95020d29edd3260cc40f7aa85c9dcb25094675ce8c616c6cad Jan 27 22:06:25 crc kubenswrapper[4803]: I0127 22:06:25.945049 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jsxr8" Jan 27 22:06:26 crc kubenswrapper[4803]: I0127 22:06:26.325055 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94cd0db2-ba1f-4eea-8d28-10aa293e7645" path="/var/lib/kubelet/pods/94cd0db2-ba1f-4eea-8d28-10aa293e7645/volumes" Jan 27 22:06:26 crc kubenswrapper[4803]: I0127 22:06:26.571048 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tp8d4" event={"ID":"b438c007-ef5f-4ed3-8f81-c5ac6d0209ac","Type":"ContainerStarted","Data":"81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17"} Jan 27 22:06:26 crc kubenswrapper[4803]: I0127 22:06:26.571371 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tp8d4" event={"ID":"b438c007-ef5f-4ed3-8f81-c5ac6d0209ac","Type":"ContainerStarted","Data":"dd10222652822f95020d29edd3260cc40f7aa85c9dcb25094675ce8c616c6cad"} Jan 27 22:06:26 crc kubenswrapper[4803]: I0127 22:06:26.592503 4803 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tp8d4" podStartSLOduration=2.534284318 podStartE2EDuration="2.592483467s" podCreationTimestamp="2026-01-27 22:06:24 +0000 UTC" firstStartedPulling="2026-01-27 22:06:25.652506268 +0000 UTC m=+1138.068527967" lastFinishedPulling="2026-01-27 22:06:25.710705417 +0000 UTC m=+1138.126727116" observedRunningTime="2026-01-27 22:06:26.588368397 +0000 UTC m=+1139.004390116" watchObservedRunningTime="2026-01-27 22:06:26.592483467 +0000 UTC m=+1139.008505176" Jan 27 22:06:35 crc kubenswrapper[4803]: I0127 22:06:35.216924 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-tp8d4" Jan 27 22:06:35 crc kubenswrapper[4803]: I0127 22:06:35.217530 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-tp8d4" Jan 27 22:06:35 crc kubenswrapper[4803]: I0127 22:06:35.253927 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-tp8d4" Jan 27 22:06:35 crc kubenswrapper[4803]: I0127 22:06:35.686670 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-tp8d4" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.738169 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z"] Jan 27 22:06:50 crc kubenswrapper[4803]: E0127 22:06:50.739408 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94cd0db2-ba1f-4eea-8d28-10aa293e7645" containerName="registry-server" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.739428 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="94cd0db2-ba1f-4eea-8d28-10aa293e7645" containerName="registry-server" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.739636 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="94cd0db2-ba1f-4eea-8d28-10aa293e7645" containerName="registry-server" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.741554 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.745037 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2t9xk" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.751591 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z"] Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.810298 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-util\") pod \"237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.810401 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8c6n\" (UniqueName: \"kubernetes.io/projected/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-kube-api-access-l8c6n\") pod \"237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.810434 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-bundle\") pod \"237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.911628 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-util\") pod \"237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.911708 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8c6n\" (UniqueName: \"kubernetes.io/projected/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-kube-api-access-l8c6n\") pod \"237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.911742 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-bundle\") pod \"237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.912134 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-util\") pod \"237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.912252 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-bundle\") pod \"237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:50 crc kubenswrapper[4803]: I0127 22:06:50.934610 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8c6n\" (UniqueName: \"kubernetes.io/projected/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-kube-api-access-l8c6n\") pod \"237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:51 crc kubenswrapper[4803]: I0127 22:06:51.100971 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:51 crc kubenswrapper[4803]: I0127 22:06:51.573455 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z"] Jan 27 22:06:51 crc kubenswrapper[4803]: W0127 22:06:51.583639 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b8a86ce_01e7_4e15_9da4_d2a34f35acbb.slice/crio-b7f68462b5c511a5157c33354613498e7019d2e5bc51a2c735e66192d0b94c61 WatchSource:0}: Error finding container b7f68462b5c511a5157c33354613498e7019d2e5bc51a2c735e66192d0b94c61: Status 404 returned error can't find the container with id b7f68462b5c511a5157c33354613498e7019d2e5bc51a2c735e66192d0b94c61 Jan 27 22:06:51 crc kubenswrapper[4803]: I0127 22:06:51.817719 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" event={"ID":"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb","Type":"ContainerStarted","Data":"ea43610b707e5b569aeae3bbc275cf525b6221715a4c176b5a449f38c5a2244c"} Jan 27 22:06:51 crc kubenswrapper[4803]: I0127 22:06:51.818054 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" event={"ID":"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb","Type":"ContainerStarted","Data":"b7f68462b5c511a5157c33354613498e7019d2e5bc51a2c735e66192d0b94c61"} Jan 27 22:06:52 crc kubenswrapper[4803]: I0127 22:06:52.827031 4803 generic.go:334] "Generic (PLEG): container finished" podID="2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" containerID="ea43610b707e5b569aeae3bbc275cf525b6221715a4c176b5a449f38c5a2244c" exitCode=0 Jan 27 22:06:52 crc kubenswrapper[4803]: I0127 22:06:52.827096 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" event={"ID":"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb","Type":"ContainerDied","Data":"ea43610b707e5b569aeae3bbc275cf525b6221715a4c176b5a449f38c5a2244c"} Jan 27 22:06:53 crc kubenswrapper[4803]: 
I0127 22:06:53.835512 4803 generic.go:334] "Generic (PLEG): container finished" podID="2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" containerID="bcec11b48a151d66f4e393909bacece3e0a42f0478cfedfe9e67d0f50b2542d9" exitCode=0 Jan 27 22:06:53 crc kubenswrapper[4803]: I0127 22:06:53.835563 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" event={"ID":"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb","Type":"ContainerDied","Data":"bcec11b48a151d66f4e393909bacece3e0a42f0478cfedfe9e67d0f50b2542d9"} Jan 27 22:06:54 crc kubenswrapper[4803]: I0127 22:06:54.844218 4803 generic.go:334] "Generic (PLEG): container finished" podID="2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" containerID="a25d893e2b189a13d77f34146405af0fb7c2b8663398172d4457397cb7ac95ae" exitCode=0 Jan 27 22:06:54 crc kubenswrapper[4803]: I0127 22:06:54.844298 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" event={"ID":"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb","Type":"ContainerDied","Data":"a25d893e2b189a13d77f34146405af0fb7c2b8663398172d4457397cb7ac95ae"} Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.186107 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.200462 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-util\") pod \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.200613 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8c6n\" (UniqueName: \"kubernetes.io/projected/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-kube-api-access-l8c6n\") pod \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.200720 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-bundle\") pod \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\" (UID: \"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb\") " Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.201818 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-bundle" (OuterVolumeSpecName: "bundle") pod "2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" (UID: "2b8a86ce-01e7-4e15-9da4-d2a34f35acbb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.209044 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-kube-api-access-l8c6n" (OuterVolumeSpecName: "kube-api-access-l8c6n") pod "2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" (UID: "2b8a86ce-01e7-4e15-9da4-d2a34f35acbb"). InnerVolumeSpecName "kube-api-access-l8c6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.240928 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-util" (OuterVolumeSpecName: "util") pod "2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" (UID: "2b8a86ce-01e7-4e15-9da4-d2a34f35acbb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.301950 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8c6n\" (UniqueName: \"kubernetes.io/projected/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-kube-api-access-l8c6n\") on node \"crc\" DevicePath \"\"" Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.301985 4803 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.301994 4803 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b8a86ce-01e7-4e15-9da4-d2a34f35acbb-util\") on node \"crc\" DevicePath \"\"" Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.866633 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" event={"ID":"2b8a86ce-01e7-4e15-9da4-d2a34f35acbb","Type":"ContainerDied","Data":"b7f68462b5c511a5157c33354613498e7019d2e5bc51a2c735e66192d0b94c61"} Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.867040 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7f68462b5c511a5157c33354613498e7019d2e5bc51a2c735e66192d0b94c61" Jan 27 22:06:56 crc kubenswrapper[4803]: I0127 22:06:56.867148 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z" Jan 27 22:07:02 crc kubenswrapper[4803]: I0127 22:07:02.784042 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5"] Jan 27 22:07:02 crc kubenswrapper[4803]: E0127 22:07:02.786868 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" containerName="pull" Jan 27 22:07:02 crc kubenswrapper[4803]: I0127 22:07:02.786903 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" containerName="pull" Jan 27 22:07:02 crc kubenswrapper[4803]: E0127 22:07:02.786928 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" containerName="extract" Jan 27 22:07:02 crc kubenswrapper[4803]: I0127 22:07:02.786934 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" containerName="extract" Jan 27 22:07:02 crc kubenswrapper[4803]: E0127 22:07:02.786950 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" containerName="util" Jan 27 22:07:02 crc kubenswrapper[4803]: I0127 22:07:02.786956 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" containerName="util" Jan 27 22:07:02 crc kubenswrapper[4803]: I0127 22:07:02.787349 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8a86ce-01e7-4e15-9da4-d2a34f35acbb" containerName="extract" Jan 27 22:07:02 crc kubenswrapper[4803]: I0127 22:07:02.787948 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" Jan 27 22:07:02 crc kubenswrapper[4803]: I0127 22:07:02.790787 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-z5r9t" Jan 27 22:07:02 crc kubenswrapper[4803]: I0127 22:07:02.808074 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5"] Jan 27 22:07:02 crc kubenswrapper[4803]: I0127 22:07:02.910143 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9x8t\" (UniqueName: \"kubernetes.io/projected/e163066d-c764-49e0-9119-cbeb4f4fe50b-kube-api-access-w9x8t\") pod \"openstack-operator-controller-init-75cd85946-nk8z5\" (UID: \"e163066d-c764-49e0-9119-cbeb4f4fe50b\") " pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" Jan 27 22:07:03 crc kubenswrapper[4803]: I0127 22:07:03.012289 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9x8t\" (UniqueName: \"kubernetes.io/projected/e163066d-c764-49e0-9119-cbeb4f4fe50b-kube-api-access-w9x8t\") pod \"openstack-operator-controller-init-75cd85946-nk8z5\" (UID: \"e163066d-c764-49e0-9119-cbeb4f4fe50b\") " pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" Jan 27 22:07:03 crc kubenswrapper[4803]: I0127 22:07:03.033478 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9x8t\" (UniqueName: \"kubernetes.io/projected/e163066d-c764-49e0-9119-cbeb4f4fe50b-kube-api-access-w9x8t\") pod \"openstack-operator-controller-init-75cd85946-nk8z5\" (UID: \"e163066d-c764-49e0-9119-cbeb4f4fe50b\") 
" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" Jan 27 22:07:03 crc kubenswrapper[4803]: I0127 22:07:03.118449 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" Jan 27 22:07:03 crc kubenswrapper[4803]: I0127 22:07:03.577488 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5"] Jan 27 22:07:03 crc kubenswrapper[4803]: I0127 22:07:03.930723 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" event={"ID":"e163066d-c764-49e0-9119-cbeb4f4fe50b","Type":"ContainerStarted","Data":"09c2c42a89b88ed9aa4b35557b983f817a8d8a99a9d76d63b321316a64107d22"} Jan 27 22:07:08 crc kubenswrapper[4803]: I0127 22:07:08.991952 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" event={"ID":"e163066d-c764-49e0-9119-cbeb4f4fe50b","Type":"ContainerStarted","Data":"7fad479640276018881a415ca023c41dae84413784cba3054be1418d5405f81b"} Jan 27 22:07:08 crc kubenswrapper[4803]: I0127 22:07:08.992570 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" Jan 27 22:07:09 crc kubenswrapper[4803]: I0127 22:07:09.058330 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" podStartSLOduration=2.75428645 podStartE2EDuration="7.058312101s" podCreationTimestamp="2026-01-27 22:07:02 +0000 UTC" firstStartedPulling="2026-01-27 22:07:03.582052102 +0000 UTC m=+1175.998073801" lastFinishedPulling="2026-01-27 22:07:07.886077753 +0000 UTC m=+1180.302099452" observedRunningTime="2026-01-27 22:07:09.05417578 +0000 UTC m=+1181.470197479" watchObservedRunningTime="2026-01-27 22:07:09.058312101 +0000 UTC m=+1181.474333800" Jan 27 22:07:13 crc kubenswrapper[4803]: I0127 22:07:13.120817 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.495763 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.497624 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.499965 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-m6vgc" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.505142 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.506653 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.508582 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-b8ncm" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.524274 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.532410 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.541958 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.543157 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.550387 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-g5ctc" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.554944 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.556619 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.560247 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-nhx94" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.581210 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.583401 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mxtp\" (UniqueName: \"kubernetes.io/projected/eac7ef2c-904d-429b-ac3f-a43a72339fde-kube-api-access-7mxtp\") pod \"barbican-operator-controller-manager-7f86f8796f-5qnbd\" (UID: \"eac7ef2c-904d-429b-ac3f-a43a72339fde\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.594709 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.626184 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.627195 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.630688 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-tpc2q" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.670258 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.688823 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvgwx\" (UniqueName: \"kubernetes.io/projected/51221b4b-024e-4134-8baa-a9478c8c596a-kube-api-access-bvgwx\") pod \"designate-operator-controller-manager-b45d7bf98-hxpmk\" (UID: \"51221b4b-024e-4134-8baa-a9478c8c596a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.688927 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf4f5\" (UniqueName: \"kubernetes.io/projected/47dce22a-001c-4774-ab99-28cd85420e1c-kube-api-access-rf4f5\") pod \"cinder-operator-controller-manager-7478f7dbf9-t9ng6\" (UID: \"47dce22a-001c-4774-ab99-28cd85420e1c\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.688978 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdsxp\" (UniqueName: \"kubernetes.io/projected/c6f78887-1cda-463f-ab3f-57703bfb7a41-kube-api-access-tdsxp\") pod \"glance-operator-controller-manager-78fdd796fd-pcnl7\" (UID: \"c6f78887-1cda-463f-ab3f-57703bfb7a41\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.689006 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mxtp\" (UniqueName: \"kubernetes.io/projected/eac7ef2c-904d-429b-ac3f-a43a72339fde-kube-api-access-7mxtp\") pod \"barbican-operator-controller-manager-7f86f8796f-5qnbd\" (UID: \"eac7ef2c-904d-429b-ac3f-a43a72339fde\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.698907 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.700293 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.706432 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9tns8" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.718912 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.720301 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.723188 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mxtp\" (UniqueName: \"kubernetes.io/projected/eac7ef2c-904d-429b-ac3f-a43a72339fde-kube-api-access-7mxtp\") pod \"barbican-operator-controller-manager-7f86f8796f-5qnbd\" (UID: \"eac7ef2c-904d-429b-ac3f-a43a72339fde\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.734262 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.734337 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qddnp" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.740025 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.765190 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.766181 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.776797 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-77x7j" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.791491 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf4f5\" (UniqueName: \"kubernetes.io/projected/47dce22a-001c-4774-ab99-28cd85420e1c-kube-api-access-rf4f5\") pod \"cinder-operator-controller-manager-7478f7dbf9-t9ng6\" (UID: \"47dce22a-001c-4774-ab99-28cd85420e1c\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.791589 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdqj\" (UniqueName: \"kubernetes.io/projected/f8498dfc-1b67-4783-9389-10d5b30b2860-kube-api-access-stdqj\") pod \"heat-operator-controller-manager-594c8c9d5d-2sffc\" (UID: \"f8498dfc-1b67-4783-9389-10d5b30b2860\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.791626 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdsxp\" (UniqueName: \"kubernetes.io/projected/c6f78887-1cda-463f-ab3f-57703bfb7a41-kube-api-access-tdsxp\") pod \"glance-operator-controller-manager-78fdd796fd-pcnl7\" (UID: \"c6f78887-1cda-463f-ab3f-57703bfb7a41\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.791660 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgtxk\" (UniqueName: \"kubernetes.io/projected/9c6792d4-9d18-4d1c-b855-65aba5ae4919-kube-api-access-rgtxk\") pod \"horizon-operator-controller-manager-77d5c5b54f-7sjdg\" (UID: \"9c6792d4-9d18-4d1c-b855-65aba5ae4919\") 
" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.791709 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvgwx\" (UniqueName: \"kubernetes.io/projected/51221b4b-024e-4134-8baa-a9478c8c596a-kube-api-access-bvgwx\") pod \"designate-operator-controller-manager-b45d7bf98-hxpmk\" (UID: \"51221b4b-024e-4134-8baa-a9478c8c596a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.797058 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.814162 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvgwx\" (UniqueName: \"kubernetes.io/projected/51221b4b-024e-4134-8baa-a9478c8c596a-kube-api-access-bvgwx\") pod \"designate-operator-controller-manager-b45d7bf98-hxpmk\" (UID: \"51221b4b-024e-4134-8baa-a9478c8c596a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.819910 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.820924 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.823256 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.830516 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdsxp\" (UniqueName: \"kubernetes.io/projected/c6f78887-1cda-463f-ab3f-57703bfb7a41-kube-api-access-tdsxp\") pod \"glance-operator-controller-manager-78fdd796fd-pcnl7\" (UID: \"c6f78887-1cda-463f-ab3f-57703bfb7a41\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.834124 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.835373 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ns484" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.855655 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf4f5\" (UniqueName: \"kubernetes.io/projected/47dce22a-001c-4774-ab99-28cd85420e1c-kube-api-access-rf4f5\") pod \"cinder-operator-controller-manager-7478f7dbf9-t9ng6\" (UID: \"47dce22a-001c-4774-ab99-28cd85420e1c\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.876634 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.877002 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.894255 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.899744 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ctb5\" (UniqueName: \"kubernetes.io/projected/1f1cd413-71e0-443e-95cf-e5d46a745b1b-kube-api-access-8ctb5\") pod \"keystone-operator-controller-manager-b8b6d4659-r5dqr\" (UID: \"1f1cd413-71e0-443e-95cf-e5d46a745b1b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.899814 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert\") pod \"infra-operator-controller-manager-694cf4f878-nxlck\" (UID: \"e9d93e19-7c2b-4d53-bfe8-7b0157dec931\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.899860 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stdqj\" (UniqueName: \"kubernetes.io/projected/f8498dfc-1b67-4783-9389-10d5b30b2860-kube-api-access-stdqj\") pod \"heat-operator-controller-manager-594c8c9d5d-2sffc\" (UID: \"f8498dfc-1b67-4783-9389-10d5b30b2860\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.899893 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rhsl\" (UniqueName: \"kubernetes.io/projected/29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8-kube-api-access-2rhsl\") pod \"ironic-operator-controller-manager-598f7747c9-w8nw7\" (UID: \"29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.899926 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgtxk\" (UniqueName: \"kubernetes.io/projected/9c6792d4-9d18-4d1c-b855-65aba5ae4919-kube-api-access-rgtxk\") pod \"horizon-operator-controller-manager-77d5c5b54f-7sjdg\" (UID: \"9c6792d4-9d18-4d1c-b855-65aba5ae4919\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.899951 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcrxv\" (UniqueName: \"kubernetes.io/projected/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-kube-api-access-kcrxv\") pod \"infra-operator-controller-manager-694cf4f878-nxlck\" (UID: \"e9d93e19-7c2b-4d53-bfe8-7b0157dec931\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.912998 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 
22:07:41.914011 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.920110 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pn8g2" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.936811 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stdqj\" (UniqueName: \"kubernetes.io/projected/f8498dfc-1b67-4783-9389-10d5b30b2860-kube-api-access-stdqj\") pod \"heat-operator-controller-manager-594c8c9d5d-2sffc\" (UID: \"f8498dfc-1b67-4783-9389-10d5b30b2860\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.936915 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv"] Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.937835 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.949368 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgtxk\" (UniqueName: \"kubernetes.io/projected/9c6792d4-9d18-4d1c-b855-65aba5ae4919-kube-api-access-rgtxk\") pod \"horizon-operator-controller-manager-77d5c5b54f-7sjdg\" (UID: \"9c6792d4-9d18-4d1c-b855-65aba5ae4919\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.958600 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-sv88x" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.959016 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" Jan 27 22:07:41 crc kubenswrapper[4803]: I0127 22:07:41.989580 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.000827 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ctb5\" (UniqueName: \"kubernetes.io/projected/1f1cd413-71e0-443e-95cf-e5d46a745b1b-kube-api-access-8ctb5\") pod \"keystone-operator-controller-manager-b8b6d4659-r5dqr\" (UID: \"1f1cd413-71e0-443e-95cf-e5d46a745b1b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.000953 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert\") pod \"infra-operator-controller-manager-694cf4f878-nxlck\" (UID: \"e9d93e19-7c2b-4d53-bfe8-7b0157dec931\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.000997 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rhsl\" (UniqueName: \"kubernetes.io/projected/29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8-kube-api-access-2rhsl\") pod \"ironic-operator-controller-manager-598f7747c9-w8nw7\" (UID: \"29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.001036 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcrxv\" (UniqueName: \"kubernetes.io/projected/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-kube-api-access-kcrxv\") pod \"infra-operator-controller-manager-694cf4f878-nxlck\" (UID: \"e9d93e19-7c2b-4d53-bfe8-7b0157dec931\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.001065 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqppv\" (UniqueName: \"kubernetes.io/projected/35783fb5-ef1c-4b33-beb1-af9fee8512d3-kube-api-access-wqppv\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-26gcs\" (UID: \"35783fb5-ef1c-4b33-beb1-af9fee8512d3\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" Jan 27 22:07:42 crc kubenswrapper[4803]: E0127 22:07:42.001503 4803 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 22:07:42 crc kubenswrapper[4803]: E0127 22:07:42.001550 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert podName:e9d93e19-7c2b-4d53-bfe8-7b0157dec931 nodeName:}" failed. No retries permitted until 2026-01-27 22:07:42.501534967 +0000 UTC m=+1214.917556666 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert") pod "infra-operator-controller-manager-694cf4f878-nxlck" (UID: "e9d93e19-7c2b-4d53-bfe8-7b0157dec931") : secret "infra-operator-webhook-server-cert" not found Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.020538 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.074705 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcrxv\" (UniqueName: \"kubernetes.io/projected/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-kube-api-access-kcrxv\") pod \"infra-operator-controller-manager-694cf4f878-nxlck\" (UID: \"e9d93e19-7c2b-4d53-bfe8-7b0157dec931\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.075341 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rhsl\" (UniqueName: \"kubernetes.io/projected/29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8-kube-api-access-2rhsl\") pod \"ironic-operator-controller-manager-598f7747c9-w8nw7\" (UID: \"29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.075627 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ctb5\" (UniqueName: \"kubernetes.io/projected/1f1cd413-71e0-443e-95cf-e5d46a745b1b-kube-api-access-8ctb5\") pod \"keystone-operator-controller-manager-b8b6d4659-r5dqr\" (UID: \"1f1cd413-71e0-443e-95cf-e5d46a745b1b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.076241 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.086890 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.088934 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.094840 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.095307 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bjk98" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.109444 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls2l2\" (UniqueName: \"kubernetes.io/projected/662a79ef-9928-408c-8cfb-62945e0b6725-kube-api-access-ls2l2\") pod \"manila-operator-controller-manager-78c6999f6f-h9xdv\" (UID: \"662a79ef-9928-408c-8cfb-62945e0b6725\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.110478 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqppv\" (UniqueName: \"kubernetes.io/projected/35783fb5-ef1c-4b33-beb1-af9fee8512d3-kube-api-access-wqppv\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-26gcs\" (UID: \"35783fb5-ef1c-4b33-beb1-af9fee8512d3\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.113500 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.133381 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.134904 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.135225 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.136930 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqppv\" (UniqueName: \"kubernetes.io/projected/35783fb5-ef1c-4b33-beb1-af9fee8512d3-kube-api-access-wqppv\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-26gcs\" (UID: \"35783fb5-ef1c-4b33-beb1-af9fee8512d3\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.137648 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ngtwj" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.155451 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.159762 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.167135 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.170908 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-lnqrz" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.177037 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.215828 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjnm\" (UniqueName: \"kubernetes.io/projected/b6c89c2e-a080-4d20-bc81-bda0f9eb17b6-kube-api-access-xqjnm\") pod \"nova-operator-controller-manager-7bdb645866-gst8v\" (UID: \"b6c89c2e-a080-4d20-bc81-bda0f9eb17b6\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.215954 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsvjw\" (UniqueName: \"kubernetes.io/projected/c46ecfda-be7b-4f42-9874-a8a94f71188f-kube-api-access-vsvjw\") pod \"neutron-operator-controller-manager-78d58447c5-t9zrn\" (UID: \"c46ecfda-be7b-4f42-9874-a8a94f71188f\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.215982 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls2l2\" (UniqueName: \"kubernetes.io/projected/662a79ef-9928-408c-8cfb-62945e0b6725-kube-api-access-ls2l2\") pod \"manila-operator-controller-manager-78c6999f6f-h9xdv\" (UID: \"662a79ef-9928-408c-8cfb-62945e0b6725\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.248278 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls2l2\" (UniqueName: \"kubernetes.io/projected/662a79ef-9928-408c-8cfb-62945e0b6725-kube-api-access-ls2l2\") pod \"manila-operator-controller-manager-78c6999f6f-h9xdv\" (UID: \"662a79ef-9928-408c-8cfb-62945e0b6725\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.270038 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.275515 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.282746 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.284753 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sv64v" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.287427 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.288493 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.291596 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-j4jnd" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.308400 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.309569 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.310275 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.316117 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-cj2r4" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.317218 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pddgd\" (UniqueName: \"kubernetes.io/projected/7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79-kube-api-access-pddgd\") pod \"octavia-operator-controller-manager-5f4cd88d46-qg2hw\" (UID: \"7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.317322 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjnm\" (UniqueName: \"kubernetes.io/projected/b6c89c2e-a080-4d20-bc81-bda0f9eb17b6-kube-api-access-xqjnm\") pod \"nova-operator-controller-manager-7bdb645866-gst8v\" (UID: \"b6c89c2e-a080-4d20-bc81-bda0f9eb17b6\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.317422 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsvjw\" (UniqueName: \"kubernetes.io/projected/c46ecfda-be7b-4f42-9874-a8a94f71188f-kube-api-access-vsvjw\") pod \"neutron-operator-controller-manager-78d58447c5-t9zrn\" (UID: \"c46ecfda-be7b-4f42-9874-a8a94f71188f\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.318234 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.343988 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.350990 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjnm\" (UniqueName: \"kubernetes.io/projected/b6c89c2e-a080-4d20-bc81-bda0f9eb17b6-kube-api-access-xqjnm\") pod \"nova-operator-controller-manager-7bdb645866-gst8v\" (UID: \"b6c89c2e-a080-4d20-bc81-bda0f9eb17b6\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.351502 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsvjw\" (UniqueName: \"kubernetes.io/projected/c46ecfda-be7b-4f42-9874-a8a94f71188f-kube-api-access-vsvjw\") pod \"neutron-operator-controller-manager-78d58447c5-t9zrn\" (UID: \"c46ecfda-be7b-4f42-9874-a8a94f71188f\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.369363 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.369397 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.416943 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.418350 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.421463 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.421656 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pddgd\" (UniqueName: \"kubernetes.io/projected/7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79-kube-api-access-pddgd\") pod \"octavia-operator-controller-manager-5f4cd88d46-qg2hw\" (UID: \"7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.423107 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.423281 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qds9w\" (UniqueName: \"kubernetes.io/projected/5bedb1c3-9c5a-4137-851d-33b1723a3221-kube-api-access-qds9w\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.423384 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4v2w\" (UniqueName: \"kubernetes.io/projected/35742b16-a222-4602-ae0a-d078eafb1ea1-kube-api-access-v4v2w\") pod \"placement-operator-controller-manager-79d5ccc684-prltl\" (UID: \"35742b16-a222-4602-ae0a-d078eafb1ea1\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.423581 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q94wv\" (UniqueName: \"kubernetes.io/projected/0592ab2d-4ade-4747-a823-73cd5dcac047-kube-api-access-q94wv\") pod \"ovn-operator-controller-manager-6f75f45d54-hcwxh\" (UID: \"0592ab2d-4ade-4747-a823-73cd5dcac047\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.422971 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.422029 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-ntt9w" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.451394 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.453354 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pddgd\" (UniqueName: \"kubernetes.io/projected/7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79-kube-api-access-pddgd\") pod \"octavia-operator-controller-manager-5f4cd88d46-qg2hw\" (UID: \"7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.459359 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.464924 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.465971 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.467953 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-25ts4" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.475713 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.476833 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.479912 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hghwp" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.493157 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.502411 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.524551 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.524596 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qds9w\" (UniqueName: \"kubernetes.io/projected/5bedb1c3-9c5a-4137-851d-33b1723a3221-kube-api-access-qds9w\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.524617 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4v2w\" (UniqueName: \"kubernetes.io/projected/35742b16-a222-4602-ae0a-d078eafb1ea1-kube-api-access-v4v2w\") pod \"placement-operator-controller-manager-79d5ccc684-prltl\" (UID: \"35742b16-a222-4602-ae0a-d078eafb1ea1\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.524660 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert\") pod \"infra-operator-controller-manager-694cf4f878-nxlck\" (UID: \"e9d93e19-7c2b-4d53-bfe8-7b0157dec931\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.524687 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6sr2\" (UniqueName: \"kubernetes.io/projected/eae71f44-8628-4436-be64-9ac3aa8f9255-kube-api-access-m6sr2\") pod \"swift-operator-controller-manager-547cbdb99f-4rzpc\" (UID: \"eae71f44-8628-4436-be64-9ac3aa8f9255\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.524738 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q94wv\" (UniqueName: \"kubernetes.io/projected/0592ab2d-4ade-4747-a823-73cd5dcac047-kube-api-access-q94wv\") pod \"ovn-operator-controller-manager-6f75f45d54-hcwxh\" (UID: \"0592ab2d-4ade-4747-a823-73cd5dcac047\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" Jan 27 22:07:42 crc kubenswrapper[4803]: E0127 22:07:42.525098 4803 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 22:07:42 crc kubenswrapper[4803]: E0127 22:07:42.525172 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert podName:e9d93e19-7c2b-4d53-bfe8-7b0157dec931 nodeName:}" failed. 
No retries permitted until 2026-01-27 22:07:43.525150747 +0000 UTC m=+1215.941172446 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert") pod "infra-operator-controller-manager-694cf4f878-nxlck" (UID: "e9d93e19-7c2b-4d53-bfe8-7b0157dec931") : secret "infra-operator-webhook-server-cert" not found Jan 27 22:07:42 crc kubenswrapper[4803]: E0127 22:07:42.525563 4803 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 22:07:42 crc kubenswrapper[4803]: E0127 22:07:42.525589 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert podName:5bedb1c3-9c5a-4137-851d-33b1723a3221 nodeName:}" failed. No retries permitted until 2026-01-27 22:07:43.025581909 +0000 UTC m=+1215.441603708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" (UID: "5bedb1c3-9c5a-4137-851d-33b1723a3221") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.543384 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q94wv\" (UniqueName: \"kubernetes.io/projected/0592ab2d-4ade-4747-a823-73cd5dcac047-kube-api-access-q94wv\") pod \"ovn-operator-controller-manager-6f75f45d54-hcwxh\" (UID: \"0592ab2d-4ade-4747-a823-73cd5dcac047\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.543885 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qds9w\" (UniqueName: \"kubernetes.io/projected/5bedb1c3-9c5a-4137-851d-33b1723a3221-kube-api-access-qds9w\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.546527 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.547208 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4v2w\" (UniqueName: \"kubernetes.io/projected/35742b16-a222-4602-ae0a-d078eafb1ea1-kube-api-access-v4v2w\") pod \"placement-operator-controller-manager-79d5ccc684-prltl\" (UID: \"35742b16-a222-4602-ae0a-d078eafb1ea1\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.569288 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-tz8ql"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.571400 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.575420 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-chbz6" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.618752 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-tz8ql"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.626715 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cst6c\" (UniqueName: \"kubernetes.io/projected/9dde9803-1302-4f0f-a353-1313e3696d7b-kube-api-access-cst6c\") pod \"telemetry-operator-controller-manager-7948f6cfb4-mpkbs\" (UID: \"9dde9803-1302-4f0f-a353-1313e3696d7b\") " pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.626783 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6sr2\" (UniqueName: \"kubernetes.io/projected/eae71f44-8628-4436-be64-9ac3aa8f9255-kube-api-access-m6sr2\") pod \"swift-operator-controller-manager-547cbdb99f-4rzpc\" (UID: \"eae71f44-8628-4436-be64-9ac3aa8f9255\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.626863 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rwcg\" (UniqueName: \"kubernetes.io/projected/7b65a167-f9c8-475c-be5b-39e0502352ab-kube-api-access-6rwcg\") pod \"test-operator-controller-manager-69797bbcbd-9hlvn\" (UID: \"7b65a167-f9c8-475c-be5b-39e0502352ab\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.650598 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.652034 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.654919 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.655085 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.655250 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xsq8t" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.655869 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.666457 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.670928 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6sr2\" (UniqueName: \"kubernetes.io/projected/eae71f44-8628-4436-be64-9ac3aa8f9255-kube-api-access-m6sr2\") pod \"swift-operator-controller-manager-547cbdb99f-4rzpc\" (UID: \"eae71f44-8628-4436-be64-9ac3aa8f9255\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.688485 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.722809 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.727352 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.730344 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8q7p\" (UniqueName: \"kubernetes.io/projected/57c28f35-52f1-48aa-ad74-3f66a5cdd52c-kube-api-access-p8q7p\") pod \"watcher-operator-controller-manager-564965969-tz8ql\" (UID: \"57c28f35-52f1-48aa-ad74-3f66a5cdd52c\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.730501 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gnnl\" (UniqueName: \"kubernetes.io/projected/62a498d3-45eb-4117-ba22-041e8d90762d-kube-api-access-9gnnl\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.730595 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cst6c\" (UniqueName: \"kubernetes.io/projected/9dde9803-1302-4f0f-a353-1313e3696d7b-kube-api-access-cst6c\") pod \"telemetry-operator-controller-manager-7948f6cfb4-mpkbs\" (UID: \"9dde9803-1302-4f0f-a353-1313e3696d7b\") " pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.730719 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.730816 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs\") pod 
\"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.730890 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rwcg\" (UniqueName: \"kubernetes.io/projected/7b65a167-f9c8-475c-be5b-39e0502352ab-kube-api-access-6rwcg\") pod \"test-operator-controller-manager-69797bbcbd-9hlvn\" (UID: \"7b65a167-f9c8-475c-be5b-39e0502352ab\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.732272 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6r5c7" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.746821 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.751385 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.770758 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rwcg\" (UniqueName: \"kubernetes.io/projected/7b65a167-f9c8-475c-be5b-39e0502352ab-kube-api-access-6rwcg\") pod \"test-operator-controller-manager-69797bbcbd-9hlvn\" (UID: \"7b65a167-f9c8-475c-be5b-39e0502352ab\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.771225 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cst6c\" (UniqueName: \"kubernetes.io/projected/9dde9803-1302-4f0f-a353-1313e3696d7b-kube-api-access-cst6c\") pod \"telemetry-operator-controller-manager-7948f6cfb4-mpkbs\" (UID: \"9dde9803-1302-4f0f-a353-1313e3696d7b\") " pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.804352 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" Jan 27 22:07:42 crc kubenswrapper[4803]: W0127 22:07:42.817369 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeac7ef2c_904d_429b_ac3f_a43a72339fde.slice/crio-c5aeffe792e225e857ad229a318e646a314d8de5722434c782c376b546399107 WatchSource:0}: Error finding container c5aeffe792e225e857ad229a318e646a314d8de5722434c782c376b546399107: Status 404 returned error can't find the container with id c5aeffe792e225e857ad229a318e646a314d8de5722434c782c376b546399107 Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.826880 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.837305 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.837418 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8q7p\" (UniqueName: \"kubernetes.io/projected/57c28f35-52f1-48aa-ad74-3f66a5cdd52c-kube-api-access-p8q7p\") pod \"watcher-operator-controller-manager-564965969-tz8ql\" (UID: \"57c28f35-52f1-48aa-ad74-3f66a5cdd52c\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.837558 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gnnl\" (UniqueName: \"kubernetes.io/projected/62a498d3-45eb-4117-ba22-041e8d90762d-kube-api-access-9gnnl\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.837630 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.837667 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kln8x\" (UniqueName: \"kubernetes.io/projected/293c9c98-184e-45cb-b0be-593f544e49df-kube-api-access-kln8x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5g5g7\" (UID: \"293c9c98-184e-45cb-b0be-593f544e49df\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7" Jan 27 22:07:42 crc kubenswrapper[4803]: E0127 22:07:42.837889 4803 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 22:07:42 crc kubenswrapper[4803]: E0127 22:07:42.837941 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs 
podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:07:43.337924175 +0000 UTC m=+1215.753945874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "metrics-server-cert" not found Jan 27 22:07:42 crc kubenswrapper[4803]: E0127 22:07:42.838936 4803 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 22:07:42 crc kubenswrapper[4803]: E0127 22:07:42.839006 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:07:43.338976184 +0000 UTC m=+1215.754997933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "webhook-server-cert" not found Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.853271 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.873158 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gnnl\" (UniqueName: \"kubernetes.io/projected/62a498d3-45eb-4117-ba22-041e8d90762d-kube-api-access-9gnnl\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.890331 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8q7p\" (UniqueName: \"kubernetes.io/projected/57c28f35-52f1-48aa-ad74-3f66a5cdd52c-kube-api-access-p8q7p\") pod \"watcher-operator-controller-manager-564965969-tz8ql\" (UID: \"57c28f35-52f1-48aa-ad74-3f66a5cdd52c\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.894511 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.898872 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.902865 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk"] Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.938951 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kln8x\" (UniqueName: \"kubernetes.io/projected/293c9c98-184e-45cb-b0be-593f544e49df-kube-api-access-kln8x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5g5g7\" (UID: \"293c9c98-184e-45cb-b0be-593f544e49df\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7" Jan 27 22:07:42 crc kubenswrapper[4803]: I0127 22:07:42.968257 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kln8x\" (UniqueName: \"kubernetes.io/projected/293c9c98-184e-45cb-b0be-593f544e49df-kube-api-access-kln8x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5g5g7\" (UID: \"293c9c98-184e-45cb-b0be-593f544e49df\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7" Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.041270 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:07:43 crc kubenswrapper[4803]: E0127 22:07:43.041509 4803 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 22:07:43 crc kubenswrapper[4803]: E0127 22:07:43.041570 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert podName:5bedb1c3-9c5a-4137-851d-33b1723a3221 nodeName:}" failed. No retries permitted until 2026-01-27 22:07:44.041556492 +0000 UTC m=+1216.457578191 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" (UID: "5bedb1c3-9c5a-4137-851d-33b1723a3221") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.056180 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7" Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.264250 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" event={"ID":"51221b4b-024e-4134-8baa-a9478c8c596a","Type":"ContainerStarted","Data":"8e336fb10069b9e3627277921bff1a043b27dfd12d694f28f6c7a81cb054bbae"} Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.265779 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" event={"ID":"eac7ef2c-904d-429b-ac3f-a43a72339fde","Type":"ContainerStarted","Data":"c5aeffe792e225e857ad229a318e646a314d8de5722434c782c376b546399107"} Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.267201 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" event={"ID":"c6f78887-1cda-463f-ab3f-57703bfb7a41","Type":"ContainerStarted","Data":"fc38af29ba61838e2c3bd2f7e9a4c33a3ccc1256442c48b8a6c3fece7cfffbaa"} Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.347764 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.347936 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:43 crc kubenswrapper[4803]: E0127 22:07:43.348072 4803 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 22:07:43 crc kubenswrapper[4803]: E0127 22:07:43.348121 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:07:44.348106544 +0000 UTC m=+1216.764128243 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "webhook-server-cert" not found Jan 27 22:07:43 crc kubenswrapper[4803]: E0127 22:07:43.348494 4803 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 22:07:43 crc kubenswrapper[4803]: E0127 22:07:43.348519 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:07:44.348511584 +0000 UTC m=+1216.764533273 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "metrics-server-cert" not found Jan 27 22:07:43 crc kubenswrapper[4803]: W0127 22:07:43.370227 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8498dfc_1b67_4783_9389_10d5b30b2860.slice/crio-6cd6cd2cc0178f18d91a5106b68a82e05f871386f169551edfeb0b1ead73a95b WatchSource:0}: Error finding container 6cd6cd2cc0178f18d91a5106b68a82e05f871386f169551edfeb0b1ead73a95b: Status 404 returned error can't find the container with id 6cd6cd2cc0178f18d91a5106b68a82e05f871386f169551edfeb0b1ead73a95b Jan 27 22:07:43 crc kubenswrapper[4803]: W0127 22:07:43.372895 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47dce22a_001c_4774_ab99_28cd85420e1c.slice/crio-a05441626078d0bc7608eb7ec624290e9f7158aa2f40e14b170addb6a9ad2d92 WatchSource:0}: Error finding container a05441626078d0bc7608eb7ec624290e9f7158aa2f40e14b170addb6a9ad2d92: Status 404 returned error can't find the container with id a05441626078d0bc7608eb7ec624290e9f7158aa2f40e14b170addb6a9ad2d92 Jan 27 22:07:43 crc kubenswrapper[4803]: W0127 22:07:43.373220 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c6792d4_9d18_4d1c_b855_65aba5ae4919.slice/crio-914209564ea6fd3627d29cbb523b6eb2d9260a8f2cd40afed14fb15b2547f61b WatchSource:0}: Error finding container 914209564ea6fd3627d29cbb523b6eb2d9260a8f2cd40afed14fb15b2547f61b: Status 404 returned error can't find the container with id 914209564ea6fd3627d29cbb523b6eb2d9260a8f2cd40afed14fb15b2547f61b Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.375343 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg"] Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.384610 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc"] Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.396279 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6"] Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.551459 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert\") pod \"infra-operator-controller-manager-694cf4f878-nxlck\" (UID: \"e9d93e19-7c2b-4d53-bfe8-7b0157dec931\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:43 crc kubenswrapper[4803]: E0127 22:07:43.551688 4803 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 22:07:43 crc kubenswrapper[4803]: E0127 22:07:43.551783 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert podName:e9d93e19-7c2b-4d53-bfe8-7b0157dec931 nodeName:}" failed. No retries permitted until 2026-01-27 22:07:45.551760191 +0000 UTC m=+1217.967781890 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert") pod "infra-operator-controller-manager-694cf4f878-nxlck" (UID: "e9d93e19-7c2b-4d53-bfe8-7b0157dec931") : secret "infra-operator-webhook-server-cert" not found Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.724788 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv"] Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.766436 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs"] Jan 27 22:07:43 crc kubenswrapper[4803]: W0127 22:07:43.782524 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35783fb5_ef1c_4b33_beb1_af9fee8512d3.slice/crio-b330e2080fdb72b180dc70378138de04556b1f93195cf00278ae5eb3d676e5f7 WatchSource:0}: Error finding container b330e2080fdb72b180dc70378138de04556b1f93195cf00278ae5eb3d676e5f7: Status 404 returned error can't find the container with id b330e2080fdb72b180dc70378138de04556b1f93195cf00278ae5eb3d676e5f7 Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.802053 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v"] Jan 27 22:07:43 crc kubenswrapper[4803]: W0127 22:07:43.805571 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc46ecfda_be7b_4f42_9874_a8a94f71188f.slice/crio-601e02020fb3e27996bf8e39c211076c1f2580035abdb1fa0a732837e7ee6d66 WatchSource:0}: Error finding container 601e02020fb3e27996bf8e39c211076c1f2580035abdb1fa0a732837e7ee6d66: Status 404 returned error can't find the container with id 601e02020fb3e27996bf8e39c211076c1f2580035abdb1fa0a732837e7ee6d66 Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.824507 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7"] Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.834242 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn"] Jan 27 22:07:43 crc kubenswrapper[4803]: I0127 22:07:43.852308 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr"] Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.063208 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:07:44 crc kubenswrapper[4803]: E0127 22:07:44.063422 4803 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 22:07:44 crc kubenswrapper[4803]: E0127 22:07:44.063505 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert podName:5bedb1c3-9c5a-4137-851d-33b1723a3221 nodeName:}" failed. 
No retries permitted until 2026-01-27 22:07:46.063484501 +0000 UTC m=+1218.479506200 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" (UID: "5bedb1c3-9c5a-4137-851d-33b1723a3221") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.276931 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" event={"ID":"35783fb5-ef1c-4b33-beb1-af9fee8512d3","Type":"ContainerStarted","Data":"b330e2080fdb72b180dc70378138de04556b1f93195cf00278ae5eb3d676e5f7"} Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.278715 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" event={"ID":"1f1cd413-71e0-443e-95cf-e5d46a745b1b","Type":"ContainerStarted","Data":"f1b8fc7edb3b6ab89a085c544b80db4a469c719836bc9e2ea47a5066c0735ca2"} Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.279954 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" event={"ID":"29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8","Type":"ContainerStarted","Data":"24f469d2e3c11b913216c2ec67e6e097f672fd33c552ff38ca15556dc33b37e9"} Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.281912 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" event={"ID":"47dce22a-001c-4774-ab99-28cd85420e1c","Type":"ContainerStarted","Data":"a05441626078d0bc7608eb7ec624290e9f7158aa2f40e14b170addb6a9ad2d92"} Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.283286 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" event={"ID":"9c6792d4-9d18-4d1c-b855-65aba5ae4919","Type":"ContainerStarted","Data":"914209564ea6fd3627d29cbb523b6eb2d9260a8f2cd40afed14fb15b2547f61b"} Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.285532 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" event={"ID":"f8498dfc-1b67-4783-9389-10d5b30b2860","Type":"ContainerStarted","Data":"6cd6cd2cc0178f18d91a5106b68a82e05f871386f169551edfeb0b1ead73a95b"} Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.288210 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" event={"ID":"c46ecfda-be7b-4f42-9874-a8a94f71188f","Type":"ContainerStarted","Data":"601e02020fb3e27996bf8e39c211076c1f2580035abdb1fa0a732837e7ee6d66"} Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.290598 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" event={"ID":"b6c89c2e-a080-4d20-bc81-bda0f9eb17b6","Type":"ContainerStarted","Data":"8ada9282fea9e70927b322f1cc58cfc4fe6a90a067627a39dafd61d281150aba"} Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.292560 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" 
event={"ID":"662a79ef-9928-408c-8cfb-62945e0b6725","Type":"ContainerStarted","Data":"3f233818263039fd3d71a866cd76d21296aef979c465f24a55c3adc3a714d360"} Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.372368 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.372469 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:44 crc kubenswrapper[4803]: E0127 22:07:44.372664 4803 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 22:07:44 crc kubenswrapper[4803]: E0127 22:07:44.372723 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:07:46.372705353 +0000 UTC m=+1218.788727052 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "metrics-server-cert" not found Jan 27 22:07:44 crc kubenswrapper[4803]: E0127 22:07:44.373799 4803 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 22:07:44 crc kubenswrapper[4803]: E0127 22:07:44.373911 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:07:46.373881865 +0000 UTC m=+1218.789903564 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "webhook-server-cert" not found Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.386025 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn"] Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.409185 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw"] Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.417377 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-tz8ql"] Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.439300 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7"] Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.446898 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc"] Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.453804 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs"] Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.464893 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh"] Jan 27 22:07:44 crc kubenswrapper[4803]: W0127 22:07:44.492752 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0592ab2d_4ade_4747_a823_73cd5dcac047.slice/crio-2efc0b7f38a545fb64847fde8eb2d5bb5b24bc357f7794ffe5c0ebd1d45a2c0d WatchSource:0}: Error finding container 2efc0b7f38a545fb64847fde8eb2d5bb5b24bc357f7794ffe5c0ebd1d45a2c0d: Status 404 returned error can't find the container with id 2efc0b7f38a545fb64847fde8eb2d5bb5b24bc357f7794ffe5c0ebd1d45a2c0d Jan 27 22:07:44 crc kubenswrapper[4803]: I0127 22:07:44.492813 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl"] Jan 27 22:07:44 crc kubenswrapper[4803]: E0127 22:07:44.504904 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kln8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5g5g7_openstack-operators(293c9c98-184e-45cb-b0be-593f544e49df): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 22:07:44 crc kubenswrapper[4803]: E0127 22:07:44.506907 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7" podUID="293c9c98-184e-45cb-b0be-593f544e49df" Jan 27 22:07:44 crc kubenswrapper[4803]: E0127 22:07:44.517647 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pddgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-qg2hw_openstack-operators(7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 22:07:44 crc kubenswrapper[4803]: E0127 22:07:44.519059 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" podUID="7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79" Jan 27 22:07:45 crc kubenswrapper[4803]: I0127 22:07:45.315512 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" event={"ID":"35742b16-a222-4602-ae0a-d078eafb1ea1","Type":"ContainerStarted","Data":"feb3d762a16e3dcd04fb531e037f403be68b2702327596d7f13bafdfe8dcd6c4"} Jan 27 22:07:45 crc kubenswrapper[4803]: I0127 22:07:45.320895 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" event={"ID":"eae71f44-8628-4436-be64-9ac3aa8f9255","Type":"ContainerStarted","Data":"8f23a4670ca2f70612c494e0a598782c45e8d7d9f3c33a65a4d09bfaa98a42eb"} Jan 27 22:07:45 crc kubenswrapper[4803]: I0127 22:07:45.323709 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" event={"ID":"0592ab2d-4ade-4747-a823-73cd5dcac047","Type":"ContainerStarted","Data":"2efc0b7f38a545fb64847fde8eb2d5bb5b24bc357f7794ffe5c0ebd1d45a2c0d"} Jan 27 22:07:45 crc kubenswrapper[4803]: I0127 22:07:45.334989 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" event={"ID":"7b65a167-f9c8-475c-be5b-39e0502352ab","Type":"ContainerStarted","Data":"b83c608c562dd29e6bb116d0a471dddc2e47b083f9e168876e1c7c2c8d20324b"} Jan 27 22:07:45 crc kubenswrapper[4803]: I0127 22:07:45.337195 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7" event={"ID":"293c9c98-184e-45cb-b0be-593f544e49df","Type":"ContainerStarted","Data":"9f1570b7ac21df41921f4319403c4aa80d4fdef13781b7111f65c62c6af23a88"} Jan 27 22:07:45 crc kubenswrapper[4803]: I0127 22:07:45.342325 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" event={"ID":"57c28f35-52f1-48aa-ad74-3f66a5cdd52c","Type":"ContainerStarted","Data":"12b1cc887c03a01398240ff243b57da2745c2a1b3533dc242dc54d780c6f9146"} Jan 27 22:07:45 crc kubenswrapper[4803]: I0127 22:07:45.347816 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
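The pair of ErrImagePull failures above is not a registry-side error: "pull QPS exceeded" is kubelet's own client-side throttle rejecting the pull before it is even attempted. The limit comes from the registryPullQPS and registryBurst fields of the KubeletConfiguration (upstream defaults 5 and 10), and starting all of these operator deployments at once, as in this run, exhausts the burst. A minimal Go sketch of that behaviour, using the same kind of token-bucket limiter kubelet wraps around image pulls (assumes the k8s.io/client-go module is available; the 18-pull loop is illustrative, not kubelet code):

package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// registryPullQPS=5 (refill rate) and registryBurst=10 (bucket size)
	// are the upstream kubelet defaults. A burst of simultaneous pulls
	// drains the bucket; the remainder fail fast with "pull QPS exceeded".
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 1; i <= 18; i++ {
		if limiter.TryAccept() {
			fmt.Printf("pull %2d: admitted\n", i)
		} else {
			fmt.Printf("pull %2d: pull QPS exceeded\n", i)
		}
	}
}

The rejection is transient: the pod workers retry the pull on later syncs, which is why the same pods reappear below in ImagePullBackOff rather than staying failed.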
pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" event={"ID":"7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79","Type":"ContainerStarted","Data":"8893f9af5429b84bad752ff82ef83e14b0598135824ab26a9afa1ac0508b496a"} Jan 27 22:07:45 crc kubenswrapper[4803]: I0127 22:07:45.353238 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" event={"ID":"9dde9803-1302-4f0f-a353-1313e3696d7b","Type":"ContainerStarted","Data":"94567caa6775b84f674070be7985da30d52277c661c03ce46cadf72fa79445b0"} Jan 27 22:07:45 crc kubenswrapper[4803]: E0127 22:07:45.385168 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7" podUID="293c9c98-184e-45cb-b0be-593f544e49df" Jan 27 22:07:45 crc kubenswrapper[4803]: E0127 22:07:45.385178 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" podUID="7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79" Jan 27 22:07:45 crc kubenswrapper[4803]: I0127 22:07:45.605098 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert\") pod \"infra-operator-controller-manager-694cf4f878-nxlck\" (UID: \"e9d93e19-7c2b-4d53-bfe8-7b0157dec931\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:45 crc kubenswrapper[4803]: E0127 22:07:45.605344 4803 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 22:07:45 crc kubenswrapper[4803]: E0127 22:07:45.605391 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert podName:e9d93e19-7c2b-4d53-bfe8-7b0157dec931 nodeName:}" failed. No retries permitted until 2026-01-27 22:07:49.60537803 +0000 UTC m=+1222.021399729 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert") pod "infra-operator-controller-manager-694cf4f878-nxlck" (UID: "e9d93e19-7c2b-4d53-bfe8-7b0157dec931") : secret "infra-operator-webhook-server-cert" not found Jan 27 22:07:46 crc kubenswrapper[4803]: I0127 22:07:46.116754 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:07:46 crc kubenswrapper[4803]: E0127 22:07:46.116971 4803 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 22:07:46 crc kubenswrapper[4803]: E0127 22:07:46.117058 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert podName:5bedb1c3-9c5a-4137-851d-33b1723a3221 nodeName:}" failed. No retries permitted until 2026-01-27 22:07:50.117037808 +0000 UTC m=+1222.533059507 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" (UID: "5bedb1c3-9c5a-4137-851d-33b1723a3221") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 22:07:46 crc kubenswrapper[4803]: E0127 22:07:46.384806 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7" podUID="293c9c98-184e-45cb-b0be-593f544e49df" Jan 27 22:07:46 crc kubenswrapper[4803]: E0127 22:07:46.384987 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" podUID="7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79" Jan 27 22:07:46 crc kubenswrapper[4803]: I0127 22:07:46.422180 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:46 crc kubenswrapper[4803]: E0127 22:07:46.422368 4803 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 22:07:46 crc kubenswrapper[4803]: I0127 22:07:46.422424 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs\") pod 
\"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:46 crc kubenswrapper[4803]: E0127 22:07:46.422445 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:07:50.422421777 +0000 UTC m=+1222.838443476 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "metrics-server-cert" not found Jan 27 22:07:46 crc kubenswrapper[4803]: E0127 22:07:46.422986 4803 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 22:07:46 crc kubenswrapper[4803]: E0127 22:07:46.423071 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:07:50.423050534 +0000 UTC m=+1222.839072233 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "webhook-server-cert" not found Jan 27 22:07:49 crc kubenswrapper[4803]: I0127 22:07:49.607354 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert\") pod \"infra-operator-controller-manager-694cf4f878-nxlck\" (UID: \"e9d93e19-7c2b-4d53-bfe8-7b0157dec931\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:49 crc kubenswrapper[4803]: E0127 22:07:49.607573 4803 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 22:07:49 crc kubenswrapper[4803]: E0127 22:07:49.607764 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert podName:e9d93e19-7c2b-4d53-bfe8-7b0157dec931 nodeName:}" failed. No retries permitted until 2026-01-27 22:07:57.607749232 +0000 UTC m=+1230.023770931 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert") pod "infra-operator-controller-manager-694cf4f878-nxlck" (UID: "e9d93e19-7c2b-4d53-bfe8-7b0157dec931") : secret "infra-operator-webhook-server-cert" not found Jan 27 22:07:50 crc kubenswrapper[4803]: I0127 22:07:50.121598 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:07:50 crc kubenswrapper[4803]: E0127 22:07:50.121797 4803 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 22:07:50 crc kubenswrapper[4803]: E0127 22:07:50.121922 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert podName:5bedb1c3-9c5a-4137-851d-33b1723a3221 nodeName:}" failed. No retries permitted until 2026-01-27 22:07:58.121903808 +0000 UTC m=+1230.537925507 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" (UID: "5bedb1c3-9c5a-4137-851d-33b1723a3221") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 22:07:50 crc kubenswrapper[4803]: I0127 22:07:50.426782 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:50 crc kubenswrapper[4803]: I0127 22:07:50.426864 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:07:50 crc kubenswrapper[4803]: E0127 22:07:50.427731 4803 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 22:07:50 crc kubenswrapper[4803]: E0127 22:07:50.427787 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:07:58.42777098 +0000 UTC m=+1230.843792679 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "webhook-server-cert" not found Jan 27 22:07:50 crc kubenswrapper[4803]: E0127 22:07:50.427732 4803 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 22:07:50 crc kubenswrapper[4803]: E0127 22:07:50.428462 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:07:58.428452868 +0000 UTC m=+1230.844474567 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "metrics-server-cert" not found Jan 27 22:07:56 crc kubenswrapper[4803]: E0127 22:07:56.737235 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 27 22:07:56 crc kubenswrapper[4803]: E0127 22:07:56.738750 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqppv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-26gcs_openstack-operators(35783fb5-ef1c-4b33-beb1-af9fee8512d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 22:07:56 crc kubenswrapper[4803]: E0127 22:07:56.739938 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" podUID="35783fb5-ef1c-4b33-beb1-af9fee8512d3" Jan 27 22:07:57 crc kubenswrapper[4803]: E0127 22:07:57.282896 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 27 22:07:57 crc kubenswrapper[4803]: E0127 22:07:57.283081 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2rhsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-w8nw7_openstack-operators(29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 22:07:57 crc kubenswrapper[4803]: E0127 22:07:57.284321 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" podUID="29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8" Jan 27 22:07:57 crc kubenswrapper[4803]: E0127 22:07:57.441347 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" podUID="35783fb5-ef1c-4b33-beb1-af9fee8512d3" Jan 27 22:07:57 crc kubenswrapper[4803]: E0127 22:07:57.443686 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" podUID="29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8" Jan 27 22:07:57 crc kubenswrapper[4803]: I0127 22:07:57.655604 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert\") pod \"infra-operator-controller-manager-694cf4f878-nxlck\" (UID: \"e9d93e19-7c2b-4d53-bfe8-7b0157dec931\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:57 crc kubenswrapper[4803]: I0127 22:07:57.670859 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9d93e19-7c2b-4d53-bfe8-7b0157dec931-cert\") pod \"infra-operator-controller-manager-694cf4f878-nxlck\" (UID: \"e9d93e19-7c2b-4d53-bfe8-7b0157dec931\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:07:57 crc kubenswrapper[4803]: I0127 22:07:57.689366 4803 util.go:30] "No sandbox for pod can be found. 
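The infra-operator cert volume finally mounts here, roughly 12 seconds after its first failure, because the infra-operator-webhook-server-cert Secret has just been created; these webhook/metrics cert Secrets are typically generated asynchronously with pod scheduling, so until they appear the kubelet can only back off and retry. When triaging this pattern it helps to check which of the Secrets named in the errors exist yet. A minimal client-go sketch (the kubeconfig path is an assumption for a CRC host; the Secret names are copied from the "Couldn't get secret" entries above):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Secret names taken from the MountVolume.SetUp failures in this log.
	for _, name := range []string{
		"infra-operator-webhook-server-cert",
		"openstack-baremetal-operator-webhook-server-cert",
		"webhook-server-cert",
		"metrics-server-cert",
	} {
		_, err := cs.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
		} else {
			fmt.Printf("%s: present\n", name)
		}
	}
}

Each name that still returns NotFound corresponds to a pod that will keep logging MountVolume.SetUp failures like the ones above.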
Jan 27 22:07:58 crc kubenswrapper[4803]: I0127 22:07:58.163763 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt"
Jan 27 22:07:58 crc kubenswrapper[4803]: E0127 22:07:58.163974 4803 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 27 22:07:58 crc kubenswrapper[4803]: E0127 22:07:58.164341 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert podName:5bedb1c3-9c5a-4137-851d-33b1723a3221 nodeName:}" failed. No retries permitted until 2026-01-27 22:08:14.164322667 +0000 UTC m=+1246.580344366 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" (UID: "5bedb1c3-9c5a-4137-851d-33b1723a3221") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 27 22:07:58 crc kubenswrapper[4803]: I0127 22:07:58.471615 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl"
Jan 27 22:07:58 crc kubenswrapper[4803]: I0127 22:07:58.471683 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl"
Jan 27 22:07:58 crc kubenswrapper[4803]: E0127 22:07:58.471816 4803 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 27 22:07:58 crc kubenswrapper[4803]: E0127 22:07:58.471864 4803 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 27 22:07:58 crc kubenswrapper[4803]: E0127 22:07:58.471885 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:08:14.471870774 +0000 UTC m=+1246.887892473 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "metrics-server-cert" not found
Jan 27 22:07:58 crc kubenswrapper[4803]: E0127 22:07:58.471994 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs podName:62a498d3-45eb-4117-ba22-041e8d90762d nodeName:}" failed. No retries permitted until 2026-01-27 22:08:14.471966837 +0000 UTC m=+1246.887988576 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs") pod "openstack-operator-controller-manager-64f565f6ff-2xjcl" (UID: "62a498d3-45eb-4117-ba22-041e8d90762d") : secret "webhook-server-cert" not found
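Note the durationBeforeRetry progression for these secret volumes: 2s at 22:07:44, 4s at 22:07:45-46, 8s at 22:07:49-50, and now 16s. The volume manager backs off exponentially per failed operation rather than hammering the API server. A sketch of that schedule (the 2s base and the doubling are read off this log, not taken from kubelet source; kubelet also caps the delay, which this excerpt never reaches):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling retry delay as observed in the nestedpendingoperations
	// entries above: 2s -> 4s -> 8s -> 16s.
	delay := 2 * time.Second
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, delay)
		delay *= 2
	}
}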
Jan 27 22:08:04 crc kubenswrapper[4803]: E0127 22:08:04.698820 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7"
Jan 27 22:08:04 crc kubenswrapper[4803]: E0127 22:08:04.699544 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rf4f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-7478f7dbf9-t9ng6_openstack-operators(47dce22a-001c-4774-ab99-28cd85420e1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 22:08:04 crc kubenswrapper[4803]: E0127 22:08:04.700763 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" podUID="47dce22a-001c-4774-ab99-28cd85420e1c"
Jan 27 22:08:05 crc kubenswrapper[4803]: E0127 22:08:05.292915 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d"
Jan 27 22:08:05 crc kubenswrapper[4803]: E0127 22:08:05.293110 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4v2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-prltl_openstack-operators(35742b16-a222-4602-ae0a-d078eafb1ea1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 22:08:05 crc kubenswrapper[4803]: E0127 22:08:05.294288 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" podUID="35742b16-a222-4602-ae0a-d078eafb1ea1"
Jan 27 22:08:05 crc kubenswrapper[4803]: E0127 22:08:05.517696 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" podUID="35742b16-a222-4602-ae0a-d078eafb1ea1"
Jan 27 22:08:05 crc kubenswrapper[4803]: E0127 22:08:05.517763 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" podUID="47dce22a-001c-4774-ab99-28cd85420e1c"
Jan 27 22:08:05 crc kubenswrapper[4803]: E0127 22:08:05.935532 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e"
Jan 27 22:08:05 crc kubenswrapper[4803]: E0127 22:08:05.935720 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vsvjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-t9zrn_openstack-operators(c46ecfda-be7b-4f42-9874-a8a94f71188f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 22:08:05 crc kubenswrapper[4803]: E0127 22:08:05.936979 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" podUID="c46ecfda-be7b-4f42-9874-a8a94f71188f"
Jan 27 22:08:06 crc kubenswrapper[4803]: E0127 22:08:06.525948 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" podUID="c46ecfda-be7b-4f42-9874-a8a94f71188f"
Jan 27 22:08:08 crc kubenswrapper[4803]: E0127 22:08:08.118437 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327"
Jan 27 22:08:08 crc kubenswrapper[4803]: E0127 22:08:08.118993 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q94wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-hcwxh_openstack-operators(0592ab2d-4ade-4747-a823-73cd5dcac047): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 22:08:08 crc kubenswrapper[4803]: E0127 22:08:08.120434 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" podUID="0592ab2d-4ade-4747-a823-73cd5dcac047"
Jan 27 22:08:08 crc kubenswrapper[4803]: E0127 22:08:08.539862 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" podUID="0592ab2d-4ade-4747-a823-73cd5dcac047"
Jan 27 22:08:08 crc kubenswrapper[4803]: E0127 22:08:08.683381 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d"
Jan 27 22:08:08 crc kubenswrapper[4803]: E0127 22:08:08.684003 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6rwcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-9hlvn_openstack-operators(7b65a167-f9c8-475c-be5b-39e0502352ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 22:08:08 crc kubenswrapper[4803]: E0127 22:08:08.686287 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" podUID="7b65a167-f9c8-475c-be5b-39e0502352ab"
Jan 27 22:08:09 crc kubenswrapper[4803]: E0127 22:08:09.544960 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" podUID="7b65a167-f9c8-475c-be5b-39e0502352ab"
Jan 27 22:08:11 crc kubenswrapper[4803]: E0127 22:08:11.256881 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922"
Jan 27 22:08:11 crc kubenswrapper[4803]: E0127 22:08:11.257064 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6sr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-4rzpc_openstack-operators(eae71f44-8628-4436-be64-9ac3aa8f9255): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 22:08:11 crc kubenswrapper[4803]: E0127 22:08:11.258431 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" podUID="eae71f44-8628-4436-be64-9ac3aa8f9255"
Jan 27 22:08:11 crc kubenswrapper[4803]: E0127 22:08:11.320801 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79"
Jan 27 22:08:11 crc kubenswrapper[4803]: E0127 22:08:11.320945 4803 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79"
Jan 27 22:08:11 crc kubenswrapper[4803]: E0127 22:08:11.321130 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cst6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7948f6cfb4-mpkbs_openstack-operators(9dde9803-1302-4f0f-a353-1313e3696d7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 22:08:11 crc kubenswrapper[4803]: E0127 22:08:11.322409 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" podUID="9dde9803-1302-4f0f-a353-1313e3696d7b"
Jan 27 22:08:11 crc kubenswrapper[4803]: E0127 22:08:11.562522 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" podUID="eae71f44-8628-4436-be64-9ac3aa8f9255"
Jan 27 22:08:11 crc kubenswrapper[4803]: E0127 22:08:11.562996 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" podUID="9dde9803-1302-4f0f-a353-1313e3696d7b"
Jan 27 22:08:13 crc kubenswrapper[4803]: E0127 22:08:13.671161 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658"
Jan 27 22:08:13 crc kubenswrapper[4803]: E0127 22:08:13.671648 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqjnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-gst8v_openstack-operators(b6c89c2e-a080-4d20-bc81-bda0f9eb17b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 22:08:13 crc kubenswrapper[4803]: E0127 22:08:13.672881 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" podUID="b6c89c2e-a080-4d20-bc81-bda0f9eb17b6" Jan 27 22:08:14 crc kubenswrapper[4803]: E0127 22:08:14.117736 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 27 22:08:14 crc kubenswrapper[4803]: E0127 22:08:14.117906 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8ctb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-r5dqr_openstack-operators(1f1cd413-71e0-443e-95cf-e5d46a745b1b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 22:08:14 crc kubenswrapper[4803]: E0127 22:08:14.120342 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" podUID="1f1cd413-71e0-443e-95cf-e5d46a745b1b" Jan 
27 22:08:14 crc kubenswrapper[4803]: I0127 22:08:14.250641 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:08:14 crc kubenswrapper[4803]: I0127 22:08:14.260248 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bedb1c3-9c5a-4137-851d-33b1723a3221-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt\" (UID: \"5bedb1c3-9c5a-4137-851d-33b1723a3221\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:08:14 crc kubenswrapper[4803]: I0127 22:08:14.397266 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:08:14 crc kubenswrapper[4803]: I0127 22:08:14.541638 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck"] Jan 27 22:08:14 crc kubenswrapper[4803]: I0127 22:08:14.557211 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:08:14 crc kubenswrapper[4803]: I0127 22:08:14.557268 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:08:14 crc kubenswrapper[4803]: I0127 22:08:14.561081 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-webhook-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:08:14 crc kubenswrapper[4803]: I0127 22:08:14.561553 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a498d3-45eb-4117-ba22-041e8d90762d-metrics-certs\") pod \"openstack-operator-controller-manager-64f565f6ff-2xjcl\" (UID: \"62a498d3-45eb-4117-ba22-041e8d90762d\") " pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:08:14 crc kubenswrapper[4803]: E0127 22:08:14.591013 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" podUID="b6c89c2e-a080-4d20-bc81-bda0f9eb17b6" Jan 27 
22:08:14 crc kubenswrapper[4803]: E0127 22:08:14.591247 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" podUID="1f1cd413-71e0-443e-95cf-e5d46a745b1b" Jan 27 22:08:14 crc kubenswrapper[4803]: I0127 22:08:14.791693 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.270591 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt"] Jan 27 22:08:15 crc kubenswrapper[4803]: W0127 22:08:15.293128 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bedb1c3_9c5a_4137_851d_33b1723a3221.slice/crio-4f66ab30b39c8bd2820671ec344ba9870bfe90e8ca032de86adfd6501e2108d1 WatchSource:0}: Error finding container 4f66ab30b39c8bd2820671ec344ba9870bfe90e8ca032de86adfd6501e2108d1: Status 404 returned error can't find the container with id 4f66ab30b39c8bd2820671ec344ba9870bfe90e8ca032de86adfd6501e2108d1 Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.598322 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl"] Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.603400 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" event={"ID":"57c28f35-52f1-48aa-ad74-3f66a5cdd52c","Type":"ContainerStarted","Data":"16a1f903c50b3c403b22ec000b847b2519ad4ad6ce01753bfec751cebe9c9a6e"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.603485 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" Jan 27 22:08:15 crc kubenswrapper[4803]: W0127 22:08:15.621685 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a498d3_45eb_4117_ba22_041e8d90762d.slice/crio-148c047aaccf470ee9b980931f6db0c7116e4b52ab236f33b5298ec0b95b31ff WatchSource:0}: Error finding container 148c047aaccf470ee9b980931f6db0c7116e4b52ab236f33b5298ec0b95b31ff: Status 404 returned error can't find the container with id 148c047aaccf470ee9b980931f6db0c7116e4b52ab236f33b5298ec0b95b31ff Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.621837 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" event={"ID":"7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79","Type":"ContainerStarted","Data":"3771eb7ec233067d01cce0bdf1337e910915fcd4804be553d6224ba1157c2425"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.622093 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.632004 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" 
event={"ID":"51221b4b-024e-4134-8baa-a9478c8c596a","Type":"ContainerStarted","Data":"aa5988328c75185fe39e00f8cead6b603a9f92a9b2ba43c311ac7352dda167ce"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.632117 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.638090 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" podStartSLOduration=6.24428852 podStartE2EDuration="33.638072044s" podCreationTimestamp="2026-01-27 22:07:42 +0000 UTC" firstStartedPulling="2026-01-27 22:07:44.495254855 +0000 UTC m=+1216.911276554" lastFinishedPulling="2026-01-27 22:08:11.889038379 +0000 UTC m=+1244.305060078" observedRunningTime="2026-01-27 22:08:15.637339134 +0000 UTC m=+1248.053360833" watchObservedRunningTime="2026-01-27 22:08:15.638072044 +0000 UTC m=+1248.054093743" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.648505 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" event={"ID":"eac7ef2c-904d-429b-ac3f-a43a72339fde","Type":"ContainerStarted","Data":"5134fe787f052582ae1c5f2d9794e69ae0b438e6100556b96462f1c2ee585d45"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.648876 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.661001 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" event={"ID":"662a79ef-9928-408c-8cfb-62945e0b6725","Type":"ContainerStarted","Data":"2e8a607a85fbdc4afacdc34b67c3f6216378f1f526ad9c3e42317ddc1235bfac"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.661868 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.664053 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" event={"ID":"5bedb1c3-9c5a-4137-851d-33b1723a3221","Type":"ContainerStarted","Data":"4f66ab30b39c8bd2820671ec344ba9870bfe90e8ca032de86adfd6501e2108d1"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.666742 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" event={"ID":"35783fb5-ef1c-4b33-beb1-af9fee8512d3","Type":"ContainerStarted","Data":"332946a3bd66cd6d7a098f430c072b436ca67d0c8d87d4452f90c9b5e20746ac"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.667503 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.669655 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" event={"ID":"29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8","Type":"ContainerStarted","Data":"df02a4c3380279d3709003353407a7653d1959eb8de8c092c62a218cbb41fc36"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.670457 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.674400 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" event={"ID":"c6f78887-1cda-463f-ab3f-57703bfb7a41","Type":"ContainerStarted","Data":"cf73f76c9b502f8cb3e8cd810ffc7b5b8343cfeed608cb06634b84912ae4ae21"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.674617 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.687233 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" event={"ID":"e9d93e19-7c2b-4d53-bfe8-7b0157dec931","Type":"ContainerStarted","Data":"5ae37db6438aff6dcaa5979a80918591799db8a32bf7ba3288241ab6f9bc0581"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.688768 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7" event={"ID":"293c9c98-184e-45cb-b0be-593f544e49df","Type":"ContainerStarted","Data":"dada362780ce2bba6d232b5db45f5a4ec603ca1df70ab92e0e201281701c3d48"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.696523 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" event={"ID":"9c6792d4-9d18-4d1c-b855-65aba5ae4919","Type":"ContainerStarted","Data":"c9df8a0eb28783a7f5b364c6aa13461a2a20b3f0a58f4642e20e94988dc0a957"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.697060 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.699460 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" podStartSLOduration=4.767136519 podStartE2EDuration="34.699443608s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:42.964740263 +0000 UTC m=+1215.380761962" lastFinishedPulling="2026-01-27 22:08:12.897047342 +0000 UTC m=+1245.313069051" observedRunningTime="2026-01-27 22:08:15.674170637 +0000 UTC m=+1248.090192336" watchObservedRunningTime="2026-01-27 22:08:15.699443608 +0000 UTC m=+1248.115465307" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.700571 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" podStartSLOduration=4.501537411 podStartE2EDuration="34.700563067s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:44.517498425 +0000 UTC m=+1216.933520124" lastFinishedPulling="2026-01-27 22:08:14.716524081 +0000 UTC m=+1247.132545780" observedRunningTime="2026-01-27 22:08:15.697900676 +0000 UTC m=+1248.113922385" watchObservedRunningTime="2026-01-27 22:08:15.700563067 +0000 UTC m=+1248.116584766" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.701724 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" 
event={"ID":"f8498dfc-1b67-4783-9389-10d5b30b2860","Type":"ContainerStarted","Data":"948f488f78855df8da62f0f21630dbaf689211511c0157dc482e12cbbcea6c50"} Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.701896 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.727615 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" podStartSLOduration=5.810999018 podStartE2EDuration="34.727594976s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:42.971194396 +0000 UTC m=+1215.387216095" lastFinishedPulling="2026-01-27 22:08:11.887790354 +0000 UTC m=+1244.303812053" observedRunningTime="2026-01-27 22:08:15.715060528 +0000 UTC m=+1248.131082227" watchObservedRunningTime="2026-01-27 22:08:15.727594976 +0000 UTC m=+1248.143616675" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.743298 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" podStartSLOduration=3.816751208 podStartE2EDuration="34.743278779s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:43.789508137 +0000 UTC m=+1216.205529836" lastFinishedPulling="2026-01-27 22:08:14.716035708 +0000 UTC m=+1247.132057407" observedRunningTime="2026-01-27 22:08:15.73664852 +0000 UTC m=+1248.152670229" watchObservedRunningTime="2026-01-27 22:08:15.743278779 +0000 UTC m=+1248.159300478" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.794022 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" podStartSLOduration=5.7308582789999996 podStartE2EDuration="34.793933234s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:42.827866224 +0000 UTC m=+1215.243887923" lastFinishedPulling="2026-01-27 22:08:11.890941179 +0000 UTC m=+1244.306962878" observedRunningTime="2026-01-27 22:08:15.772141527 +0000 UTC m=+1248.188163226" watchObservedRunningTime="2026-01-27 22:08:15.793933234 +0000 UTC m=+1248.209954953" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.820360 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" podStartSLOduration=6.818298591 podStartE2EDuration="34.820335485s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:43.377378532 +0000 UTC m=+1215.793400231" lastFinishedPulling="2026-01-27 22:08:11.379415426 +0000 UTC m=+1243.795437125" observedRunningTime="2026-01-27 22:08:15.799243127 +0000 UTC m=+1248.215264826" watchObservedRunningTime="2026-01-27 22:08:15.820335485 +0000 UTC m=+1248.236357184" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.868504 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" podStartSLOduration=3.955307642 podStartE2EDuration="34.868489153s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:43.802997071 +0000 UTC m=+1216.219018770" lastFinishedPulling="2026-01-27 22:08:14.716178582 +0000 UTC m=+1247.132200281" observedRunningTime="2026-01-27 
22:08:15.868055421 +0000 UTC m=+1248.284077120" watchObservedRunningTime="2026-01-27 22:08:15.868489153 +0000 UTC m=+1248.284510852" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.872273 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5g5g7" podStartSLOduration=3.5548237670000002 podStartE2EDuration="33.872259685s" podCreationTimestamp="2026-01-27 22:07:42 +0000 UTC" firstStartedPulling="2026-01-27 22:07:44.504772451 +0000 UTC m=+1216.920794140" lastFinishedPulling="2026-01-27 22:08:14.822208359 +0000 UTC m=+1247.238230058" observedRunningTime="2026-01-27 22:08:15.837105917 +0000 UTC m=+1248.253127616" watchObservedRunningTime="2026-01-27 22:08:15.872259685 +0000 UTC m=+1248.288281384" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.938729 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" podStartSLOduration=6.845749432 podStartE2EDuration="34.938708225s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:43.794814311 +0000 UTC m=+1216.210836010" lastFinishedPulling="2026-01-27 22:08:11.887773104 +0000 UTC m=+1244.303794803" observedRunningTime="2026-01-27 22:08:15.897762872 +0000 UTC m=+1248.313784571" watchObservedRunningTime="2026-01-27 22:08:15.938708225 +0000 UTC m=+1248.354729924" Jan 27 22:08:15 crc kubenswrapper[4803]: I0127 22:08:15.963764 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" podStartSLOduration=6.447850039 podStartE2EDuration="34.96374342s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:43.373166388 +0000 UTC m=+1215.789188087" lastFinishedPulling="2026-01-27 22:08:11.889059769 +0000 UTC m=+1244.305081468" observedRunningTime="2026-01-27 22:08:15.921321227 +0000 UTC m=+1248.337342926" watchObservedRunningTime="2026-01-27 22:08:15.96374342 +0000 UTC m=+1248.379765119" Jan 27 22:08:16 crc kubenswrapper[4803]: I0127 22:08:16.344231 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:08:16 crc kubenswrapper[4803]: I0127 22:08:16.344536 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:08:16 crc kubenswrapper[4803]: I0127 22:08:16.713067 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" event={"ID":"62a498d3-45eb-4117-ba22-041e8d90762d","Type":"ContainerStarted","Data":"148c047aaccf470ee9b980931f6db0c7116e4b52ab236f33b5298ec0b95b31ff"} Jan 27 22:08:18 crc kubenswrapper[4803]: I0127 22:08:18.729537 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" 
event={"ID":"62a498d3-45eb-4117-ba22-041e8d90762d","Type":"ContainerStarted","Data":"d50a630260806b1eed5daa9ed845700e3a4a58e3d7c9164fe00e3fcd63e6e636"} Jan 27 22:08:18 crc kubenswrapper[4803]: I0127 22:08:18.729678 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:08:18 crc kubenswrapper[4803]: I0127 22:08:18.759033 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" podStartSLOduration=36.759013284 podStartE2EDuration="36.759013284s" podCreationTimestamp="2026-01-27 22:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:08:18.758518241 +0000 UTC m=+1251.174539940" watchObservedRunningTime="2026-01-27 22:08:18.759013284 +0000 UTC m=+1251.175034973" Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.744104 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" event={"ID":"e9d93e19-7c2b-4d53-bfe8-7b0157dec931","Type":"ContainerStarted","Data":"e11d114c4308fae5cc90e124a41f7760435118ce5bcdeb4e836c101e81140bbd"} Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.745816 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.747440 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" event={"ID":"47dce22a-001c-4774-ab99-28cd85420e1c","Type":"ContainerStarted","Data":"74dcdde3ae4dd10af4db49b00e0504855c9d4f1359a2d506dae997c06515dd99"} Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.747723 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.748776 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" event={"ID":"35742b16-a222-4602-ae0a-d078eafb1ea1","Type":"ContainerStarted","Data":"07b05b65f068e2ff2f38a5d13c80698149ce699ab9746c2f53453dd3445b9771"} Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.748912 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.749941 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" event={"ID":"5bedb1c3-9c5a-4137-851d-33b1723a3221","Type":"ContainerStarted","Data":"1943fa1831b28dcb16a3c0da317dd192683eff0cc2a63cd98c4b4b469583a041"} Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.750158 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.763655 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" podStartSLOduration=34.337130815 podStartE2EDuration="39.763640614s" podCreationTimestamp="2026-01-27 
22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:08:14.750969519 +0000 UTC m=+1247.166991218" lastFinishedPulling="2026-01-27 22:08:20.177479328 +0000 UTC m=+1252.593501017" observedRunningTime="2026-01-27 22:08:20.759666856 +0000 UTC m=+1253.175688555" watchObservedRunningTime="2026-01-27 22:08:20.763640614 +0000 UTC m=+1253.179662313" Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.773487 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" podStartSLOduration=2.9738866269999997 podStartE2EDuration="39.773458378s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:43.377714801 +0000 UTC m=+1215.793736500" lastFinishedPulling="2026-01-27 22:08:20.177286552 +0000 UTC m=+1252.593308251" observedRunningTime="2026-01-27 22:08:20.772627096 +0000 UTC m=+1253.188648795" watchObservedRunningTime="2026-01-27 22:08:20.773458378 +0000 UTC m=+1253.189480077" Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.836919 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" podStartSLOduration=4.157530693 podStartE2EDuration="39.836895958s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:44.498244086 +0000 UTC m=+1216.914265785" lastFinishedPulling="2026-01-27 22:08:20.177609351 +0000 UTC m=+1252.593631050" observedRunningTime="2026-01-27 22:08:20.828972824 +0000 UTC m=+1253.244994523" watchObservedRunningTime="2026-01-27 22:08:20.836895958 +0000 UTC m=+1253.252917677" Jan 27 22:08:20 crc kubenswrapper[4803]: I0127 22:08:20.837658 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" podStartSLOduration=34.926456076 podStartE2EDuration="39.837650618s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:08:15.303991452 +0000 UTC m=+1247.720013151" lastFinishedPulling="2026-01-27 22:08:20.215185994 +0000 UTC m=+1252.631207693" observedRunningTime="2026-01-27 22:08:20.811095613 +0000 UTC m=+1253.227117392" watchObservedRunningTime="2026-01-27 22:08:20.837650618 +0000 UTC m=+1253.253672317" Jan 27 22:08:21 crc kubenswrapper[4803]: I0127 22:08:21.826800 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" Jan 27 22:08:21 crc kubenswrapper[4803]: I0127 22:08:21.890177 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" Jan 27 22:08:21 crc kubenswrapper[4803]: I0127 22:08:21.898154 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" Jan 27 22:08:21 crc kubenswrapper[4803]: I0127 22:08:21.962366 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" Jan 27 22:08:22 crc kubenswrapper[4803]: I0127 22:08:22.080918 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" Jan 27 22:08:22 crc kubenswrapper[4803]: I0127 22:08:22.117873 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" Jan 27 22:08:22 crc kubenswrapper[4803]: I0127 22:08:22.320940 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" Jan 27 22:08:22 crc kubenswrapper[4803]: I0127 22:08:22.347358 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" Jan 27 22:08:22 crc kubenswrapper[4803]: I0127 22:08:22.498381 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" Jan 27 22:08:22 crc kubenswrapper[4803]: I0127 22:08:22.766131 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" event={"ID":"0592ab2d-4ade-4747-a823-73cd5dcac047","Type":"ContainerStarted","Data":"1a3aeb39e97105346886d6cb583149635c0a6b1d687ca31098e9174e7cdb7ab6"} Jan 27 22:08:22 crc kubenswrapper[4803]: I0127 22:08:22.766639 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" Jan 27 22:08:22 crc kubenswrapper[4803]: I0127 22:08:22.788147 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" podStartSLOduration=4.257825774 podStartE2EDuration="41.788129907s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:44.503074316 +0000 UTC m=+1216.919096015" lastFinishedPulling="2026-01-27 22:08:22.033378449 +0000 UTC m=+1254.449400148" observedRunningTime="2026-01-27 22:08:22.78079261 +0000 UTC m=+1255.196814329" watchObservedRunningTime="2026-01-27 22:08:22.788129907 +0000 UTC m=+1255.204151606" Jan 27 22:08:22 crc kubenswrapper[4803]: I0127 22:08:22.902516 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" Jan 27 22:08:23 crc kubenswrapper[4803]: I0127 22:08:23.775302 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" event={"ID":"7b65a167-f9c8-475c-be5b-39e0502352ab","Type":"ContainerStarted","Data":"c02d9a673890150cc1c06b7c1cc82a219481f9b130d1be0354ed61f62753a909"} Jan 27 22:08:23 crc kubenswrapper[4803]: I0127 22:08:23.775870 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" Jan 27 22:08:23 crc kubenswrapper[4803]: I0127 22:08:23.777017 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" event={"ID":"c46ecfda-be7b-4f42-9874-a8a94f71188f","Type":"ContainerStarted","Data":"19682635f24e08c3a30b209f2a4f569779b0a3a08e01e3863e71a91696950a6b"} Jan 27 22:08:23 crc kubenswrapper[4803]: I0127 22:08:23.777373 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" Jan 27 22:08:23 crc kubenswrapper[4803]: I0127 22:08:23.778912 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" 
event={"ID":"9dde9803-1302-4f0f-a353-1313e3696d7b","Type":"ContainerStarted","Data":"43add3c30b09970dc8fc812811fdb00f0433d9ee33d1d1fc3429d70212feb9bd"} Jan 27 22:08:23 crc kubenswrapper[4803]: I0127 22:08:23.779243 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" Jan 27 22:08:23 crc kubenswrapper[4803]: I0127 22:08:23.803478 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" podStartSLOduration=3.333325999 podStartE2EDuration="41.803453858s" podCreationTimestamp="2026-01-27 22:07:42 +0000 UTC" firstStartedPulling="2026-01-27 22:07:44.406590866 +0000 UTC m=+1216.822612565" lastFinishedPulling="2026-01-27 22:08:22.876718725 +0000 UTC m=+1255.292740424" observedRunningTime="2026-01-27 22:08:23.794817245 +0000 UTC m=+1256.210838964" watchObservedRunningTime="2026-01-27 22:08:23.803453858 +0000 UTC m=+1256.219475567" Jan 27 22:08:23 crc kubenswrapper[4803]: I0127 22:08:23.817600 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" podStartSLOduration=3.792764652 podStartE2EDuration="42.817582169s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:43.810184054 +0000 UTC m=+1216.226205753" lastFinishedPulling="2026-01-27 22:08:22.835001571 +0000 UTC m=+1255.251023270" observedRunningTime="2026-01-27 22:08:23.812103481 +0000 UTC m=+1256.228125180" watchObservedRunningTime="2026-01-27 22:08:23.817582169 +0000 UTC m=+1256.233603868" Jan 27 22:08:23 crc kubenswrapper[4803]: I0127 22:08:23.835580 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" podStartSLOduration=3.956617588 podStartE2EDuration="42.835564823s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:44.494975288 +0000 UTC m=+1216.910996987" lastFinishedPulling="2026-01-27 22:08:23.373922513 +0000 UTC m=+1255.789944222" observedRunningTime="2026-01-27 22:08:23.831821492 +0000 UTC m=+1256.247843191" watchObservedRunningTime="2026-01-27 22:08:23.835564823 +0000 UTC m=+1256.251586522" Jan 27 22:08:24 crc kubenswrapper[4803]: I0127 22:08:24.798375 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 22:08:26 crc kubenswrapper[4803]: I0127 22:08:26.830583 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" event={"ID":"eae71f44-8628-4436-be64-9ac3aa8f9255","Type":"ContainerStarted","Data":"81d662bbd187cdb53ef687289504df196e6cd65b7ab5da1c5a02a098256eed1a"} Jan 27 22:08:26 crc kubenswrapper[4803]: I0127 22:08:26.831249 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" Jan 27 22:08:26 crc kubenswrapper[4803]: I0127 22:08:26.848528 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" podStartSLOduration=4.57643319 podStartE2EDuration="45.848509633s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:44.503502918 +0000 UTC m=+1216.919524617" 
lastFinishedPulling="2026-01-27 22:08:25.775579361 +0000 UTC m=+1258.191601060" observedRunningTime="2026-01-27 22:08:26.843404906 +0000 UTC m=+1259.259426605" watchObservedRunningTime="2026-01-27 22:08:26.848509633 +0000 UTC m=+1259.264531332" Jan 27 22:08:27 crc kubenswrapper[4803]: I0127 22:08:27.695694 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 22:08:28 crc kubenswrapper[4803]: I0127 22:08:28.846862 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" event={"ID":"b6c89c2e-a080-4d20-bc81-bda0f9eb17b6","Type":"ContainerStarted","Data":"3eb637d3a162c905120e6db8c58f1592726d54d1112a041b04bf066b02c6455c"} Jan 27 22:08:28 crc kubenswrapper[4803]: I0127 22:08:28.849372 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" Jan 27 22:08:28 crc kubenswrapper[4803]: I0127 22:08:28.870056 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" podStartSLOduration=3.816881252 podStartE2EDuration="47.870038717s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:43.783185807 +0000 UTC m=+1216.199207506" lastFinishedPulling="2026-01-27 22:08:27.836343282 +0000 UTC m=+1260.252364971" observedRunningTime="2026-01-27 22:08:28.863027889 +0000 UTC m=+1261.279049588" watchObservedRunningTime="2026-01-27 22:08:28.870038717 +0000 UTC m=+1261.286060416" Jan 27 22:08:30 crc kubenswrapper[4803]: I0127 22:08:30.861443 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" event={"ID":"1f1cd413-71e0-443e-95cf-e5d46a745b1b","Type":"ContainerStarted","Data":"63c4ea916498bfd2b07897cc5a772a9eabdb7685b32dc7f4c81bd26b3d606003"} Jan 27 22:08:30 crc kubenswrapper[4803]: I0127 22:08:30.862289 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" Jan 27 22:08:30 crc kubenswrapper[4803]: I0127 22:08:30.884342 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" podStartSLOduration=3.845620537 podStartE2EDuration="49.884324326s" podCreationTimestamp="2026-01-27 22:07:41 +0000 UTC" firstStartedPulling="2026-01-27 22:07:43.803235058 +0000 UTC m=+1216.219256757" lastFinishedPulling="2026-01-27 22:08:29.841938847 +0000 UTC m=+1262.257960546" observedRunningTime="2026-01-27 22:08:30.877195625 +0000 UTC m=+1263.293217364" watchObservedRunningTime="2026-01-27 22:08:30.884324326 +0000 UTC m=+1263.300346025" Jan 27 22:08:32 crc kubenswrapper[4803]: I0127 22:08:32.138686 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" Jan 27 22:08:32 crc kubenswrapper[4803]: I0127 22:08:32.425996 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" Jan 27 22:08:32 crc kubenswrapper[4803]: I0127 22:08:32.659926 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" Jan 27 22:08:32 crc 
kubenswrapper[4803]: I0127 22:08:32.695454 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" Jan 27 22:08:32 crc kubenswrapper[4803]: I0127 22:08:32.761491 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" Jan 27 22:08:32 crc kubenswrapper[4803]: I0127 22:08:32.810553 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" Jan 27 22:08:32 crc kubenswrapper[4803]: I0127 22:08:32.856609 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" Jan 27 22:08:34 crc kubenswrapper[4803]: I0127 22:08:34.403894 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 22:08:42 crc kubenswrapper[4803]: I0127 22:08:42.318629 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" Jan 27 22:08:42 crc kubenswrapper[4803]: I0127 22:08:42.470332 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" Jan 27 22:08:46 crc kubenswrapper[4803]: I0127 22:08:46.343812 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:08:46 crc kubenswrapper[4803]: I0127 22:08:46.344483 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.473989 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h2p88"] Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.479627 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.486000 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.486344 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.486466 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-ctx6r" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.487215 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.503072 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h2p88"] Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.585906 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8lb9m"] Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.587865 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.590657 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.627808 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8lb9m"] Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.634274 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw7qs\" (UniqueName: \"kubernetes.io/projected/30665406-a35a-42a3-b979-45e64be7e47c-kube-api-access-cw7qs\") pod \"dnsmasq-dns-675f4bcbfc-h2p88\" (UID: \"30665406-a35a-42a3-b979-45e64be7e47c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.634354 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30665406-a35a-42a3-b979-45e64be7e47c-config\") pod \"dnsmasq-dns-675f4bcbfc-h2p88\" (UID: \"30665406-a35a-42a3-b979-45e64be7e47c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.736280 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn5hg\" (UniqueName: \"kubernetes.io/projected/6079ed9d-a8d5-43d9-955b-f165e96ac559-kube-api-access-sn5hg\") pod \"dnsmasq-dns-78dd6ddcc-8lb9m\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.736396 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw7qs\" (UniqueName: \"kubernetes.io/projected/30665406-a35a-42a3-b979-45e64be7e47c-kube-api-access-cw7qs\") pod \"dnsmasq-dns-675f4bcbfc-h2p88\" (UID: \"30665406-a35a-42a3-b979-45e64be7e47c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.736457 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30665406-a35a-42a3-b979-45e64be7e47c-config\") pod \"dnsmasq-dns-675f4bcbfc-h2p88\" (UID: \"30665406-a35a-42a3-b979-45e64be7e47c\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.736482 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-config\") pod \"dnsmasq-dns-78dd6ddcc-8lb9m\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.736509 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8lb9m\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.737566 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30665406-a35a-42a3-b979-45e64be7e47c-config\") pod \"dnsmasq-dns-675f4bcbfc-h2p88\" (UID: \"30665406-a35a-42a3-b979-45e64be7e47c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.754465 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw7qs\" (UniqueName: \"kubernetes.io/projected/30665406-a35a-42a3-b979-45e64be7e47c-kube-api-access-cw7qs\") pod \"dnsmasq-dns-675f4bcbfc-h2p88\" (UID: \"30665406-a35a-42a3-b979-45e64be7e47c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.837791 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-config\") pod \"dnsmasq-dns-78dd6ddcc-8lb9m\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.837827 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8lb9m\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.837978 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn5hg\" (UniqueName: \"kubernetes.io/projected/6079ed9d-a8d5-43d9-955b-f165e96ac559-kube-api-access-sn5hg\") pod \"dnsmasq-dns-78dd6ddcc-8lb9m\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.838959 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-config\") pod \"dnsmasq-dns-78dd6ddcc-8lb9m\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.838974 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8lb9m\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.868555 4803 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.873816 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn5hg\" (UniqueName: \"kubernetes.io/projected/6079ed9d-a8d5-43d9-955b-f165e96ac559-kube-api-access-sn5hg\") pod \"dnsmasq-dns-78dd6ddcc-8lb9m\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:08:58 crc kubenswrapper[4803]: I0127 22:08:58.916970 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:08:59 crc kubenswrapper[4803]: I0127 22:08:59.376656 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h2p88"] Jan 27 22:08:59 crc kubenswrapper[4803]: I0127 22:08:59.453073 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8lb9m"] Jan 27 22:09:00 crc kubenswrapper[4803]: I0127 22:09:00.085785 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" event={"ID":"30665406-a35a-42a3-b979-45e64be7e47c","Type":"ContainerStarted","Data":"91276125a71ea70d3eef07cc5d3f5203462357c3cf03f81c4d1ae3109bfac01c"} Jan 27 22:09:00 crc kubenswrapper[4803]: I0127 22:09:00.086836 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" event={"ID":"6079ed9d-a8d5-43d9-955b-f165e96ac559","Type":"ContainerStarted","Data":"2a3430a54a7f27c7b86e3e7ec809ac5609a0343ea81d3d28763a20dd966b5fb2"} Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.375674 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h2p88"] Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.403647 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q2v4v"] Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.405038 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.414371 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q2v4v"] Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.587428 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jl7b\" (UniqueName: \"kubernetes.io/projected/d2331ee6-b42a-43ef-b314-ab0084130872-kube-api-access-2jl7b\") pod \"dnsmasq-dns-666b6646f7-q2v4v\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.587495 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-config\") pod \"dnsmasq-dns-666b6646f7-q2v4v\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.587527 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q2v4v\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.661661 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8lb9m"] Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.684542 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8jgrp"] Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.685886 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.689488 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jl7b\" (UniqueName: \"kubernetes.io/projected/d2331ee6-b42a-43ef-b314-ab0084130872-kube-api-access-2jl7b\") pod \"dnsmasq-dns-666b6646f7-q2v4v\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.689545 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-config\") pod \"dnsmasq-dns-666b6646f7-q2v4v\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.689590 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q2v4v\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.690423 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q2v4v\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.691129 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-config\") pod \"dnsmasq-dns-666b6646f7-q2v4v\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.704297 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8jgrp"] Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.727594 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jl7b\" (UniqueName: \"kubernetes.io/projected/d2331ee6-b42a-43ef-b314-ab0084130872-kube-api-access-2jl7b\") pod \"dnsmasq-dns-666b6646f7-q2v4v\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.736404 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.791547 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8jgrp\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.791588 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-config\") pod \"dnsmasq-dns-57d769cc4f-8jgrp\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.791626 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6wsq\" (UniqueName: \"kubernetes.io/projected/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-kube-api-access-x6wsq\") pod \"dnsmasq-dns-57d769cc4f-8jgrp\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.894212 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8jgrp\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.894535 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-config\") pod \"dnsmasq-dns-57d769cc4f-8jgrp\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.894570 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6wsq\" (UniqueName: \"kubernetes.io/projected/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-kube-api-access-x6wsq\") pod \"dnsmasq-dns-57d769cc4f-8jgrp\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.896113 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8jgrp\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.896652 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-config\") pod \"dnsmasq-dns-57d769cc4f-8jgrp\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:01 crc kubenswrapper[4803]: I0127 22:09:01.928600 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6wsq\" (UniqueName: \"kubernetes.io/projected/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-kube-api-access-x6wsq\") pod \"dnsmasq-dns-57d769cc4f-8jgrp\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.002520 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.278770 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q2v4v"] Jan 27 22:09:02 crc kubenswrapper[4803]: W0127 22:09:02.282563 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2331ee6_b42a_43ef_b314_ab0084130872.slice/crio-517183d08ed6a7369cdbbb32d2fe615678d483897588d6f15c7a3a6b86d481a7 WatchSource:0}: Error finding container 517183d08ed6a7369cdbbb32d2fe615678d483897588d6f15c7a3a6b86d481a7: Status 404 returned error can't find the container with id 517183d08ed6a7369cdbbb32d2fe615678d483897588d6f15c7a3a6b86d481a7 Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.513034 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.514949 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.531557 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.531764 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7vpn6" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.531891 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.532034 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.533585 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.533724 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.533904 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.537597 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.544389 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8jgrp"] Jan 27 22:09:02 crc kubenswrapper[4803]: W0127 22:09:02.561771 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a75dbc6_2f5d_47c1_96f4_4af86d4ead23.slice/crio-d3dd44eaf8b578110e54e7143d86b1b1703b052a985005e85f86c54baa941d81 WatchSource:0}: Error finding container d3dd44eaf8b578110e54e7143d86b1b1703b052a985005e85f86c54baa941d81: Status 404 returned error can't find the container with id d3dd44eaf8b578110e54e7143d86b1b1703b052a985005e85f86c54baa941d81 Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.567541 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.569789 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.586126 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.588467 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.602833 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.609327 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/254b4a13-ff42-41cb-ae18-373ad9cfc583-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.609372 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.609395 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.609423 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.609459 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.609513 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.609827 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.609895 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w75ct\" (UniqueName: 
\"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-kube-api-access-w75ct\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.609929 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-server-conf\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.610033 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-config-data\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.610219 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/254b4a13-ff42-41cb-ae18-373ad9cfc583-pod-info\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.621153 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712266 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/993ad889-77c3-480e-8b5b-985766d488be-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712329 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ttr\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-kube-api-access-22ttr\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712354 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712377 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712430 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712478 4803 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712507 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712535 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50e2e860-a414-4c3e-888e-ac5873f13d2d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712577 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712619 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50e2e860-a414-4c3e-888e-ac5873f13d2d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712650 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712689 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712735 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712755 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712771 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712790 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2szl\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-kube-api-access-t2szl\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712809 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w75ct\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-kube-api-access-w75ct\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712834 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-server-conf\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712874 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-config-data\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712902 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-config-data\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712929 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712933 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712949 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712967 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-server-conf\") pod \"rabbitmq-server-2\" (UID: 
\"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.712972 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.713278 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/993ad889-77c3-480e-8b5b-985766d488be-pod-info\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.713317 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.713371 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.713545 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/254b4a13-ff42-41cb-ae18-373ad9cfc583-pod-info\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.713567 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.713585 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.713616 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.713631 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc 
kubenswrapper[4803]: I0127 22:09:02.714124 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.714508 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-server-conf\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.714632 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-config-data\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.714728 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/254b4a13-ff42-41cb-ae18-373ad9cfc583-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.715394 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-config-data\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.719230 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/254b4a13-ff42-41cb-ae18-373ad9cfc583-pod-info\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.723229 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.723925 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.723975 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a4e240b2c50a8a372898adfdb57e49d491cca8373a1fb16c49708c9e8f1afc73/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.724141 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/254b4a13-ff42-41cb-ae18-373ad9cfc583-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.725386 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.739785 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w75ct\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-kube-api-access-w75ct\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.762649 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") pod \"rabbitmq-server-0\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819298 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819357 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819378 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819395 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2szl\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-kube-api-access-t2szl\") pod \"rabbitmq-server-1\" (UID: 
\"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819428 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-config-data\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819456 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819473 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819488 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-server-conf\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819505 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/993ad889-77c3-480e-8b5b-985766d488be-pod-info\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819522 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819546 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819582 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819601 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819617 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819634 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819653 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-config-data\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819681 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/993ad889-77c3-480e-8b5b-985766d488be-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819713 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22ttr\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-kube-api-access-22ttr\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819752 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819774 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50e2e860-a414-4c3e-888e-ac5873f13d2d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819796 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.819816 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50e2e860-a414-4c3e-888e-ac5873f13d2d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.820674 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" 
Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.820731 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.820981 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.821809 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-config-data\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.822076 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-server-conf\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.822363 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.827493 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.828780 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.829300 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.830244 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-config-data\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.836034 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " 
pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.840598 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.840673 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.870974 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50e2e860-a414-4c3e-888e-ac5873f13d2d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.872677 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50e2e860-a414-4c3e-888e-ac5873f13d2d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.873984 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.875330 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.875499 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1dd4a31266194ed34fd80142b7bb117a8dffded2c221ac334a264cd95330634/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.876936 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.877033 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4350b2c16e320f8700639adfba841d8f1a9d9743f1e242da10e10d34d90f7352/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.877533 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/993ad889-77c3-480e-8b5b-985766d488be-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.877982 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.878456 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.879839 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2szl\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-kube-api-access-t2szl\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.884047 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/993ad889-77c3-480e-8b5b-985766d488be-pod-info\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.922964 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.942230 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.942486 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.942874 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.943113 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.943809 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.945175 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ttr\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-kube-api-access-22ttr\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.946120 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 22:09:02 crc kubenswrapper[4803]: I0127 22:09:02.946543 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-p74n6" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.035992 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.067484 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.067706 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.068263 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.068305 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.068504 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.068820 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.069034 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.069257 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.069475 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.069494 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.069676 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk6n7\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-kube-api-access-vk6n7\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.071939 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\") pod \"rabbitmq-server-1\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " pod="openstack/rabbitmq-server-1" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.076406 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\") pod \"rabbitmq-server-2\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " pod="openstack/rabbitmq-server-2" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 
22:09:03.142683 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" event={"ID":"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23","Type":"ContainerStarted","Data":"d3dd44eaf8b578110e54e7143d86b1b1703b052a985005e85f86c54baa941d81"} Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.145309 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" event={"ID":"d2331ee6-b42a-43ef-b314-ab0084130872","Type":"ContainerStarted","Data":"517183d08ed6a7369cdbbb32d2fe615678d483897588d6f15c7a3a6b86d481a7"} Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.171250 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.171302 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.171395 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.171430 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.171473 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.171503 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.171547 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.171588 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.171640 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.171670 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.171707 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk6n7\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-kube-api-access-vk6n7\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.172542 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.172924 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.172934 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.173142 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.173816 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.181544 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 
22:09:03.183742 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.186566 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.189330 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk6n7\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-kube-api-access-vk6n7\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.189991 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.190020 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/74c5fee18023779828160eb9f7d80ed70241abf770f5ddc3a17e57a288e11748/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.202470 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.211515 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.239826 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.376764 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\") pod \"rabbitmq-cell1-server-0\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.571592 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.671898 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.873906 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.957430 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.959020 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.965617 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6lw65" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.965905 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.965992 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.966114 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.966378 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 22:09:03 crc kubenswrapper[4803]: I0127 22:09:03.976169 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.012717 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 22:09:04 crc kubenswrapper[4803]: W0127 22:09:04.014938 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50e2e860_a414_4c3e_888e_ac5873f13d2d.slice/crio-322ea0c137cc48a65e3b48eff23bea9203168f789fa9365953812a72aae7be22 WatchSource:0}: Error finding container 322ea0c137cc48a65e3b48eff23bea9203168f789fa9365953812a72aae7be22: Status 404 returned error can't find the container with id 322ea0c137cc48a65e3b48eff23bea9203168f789fa9365953812a72aae7be22 Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.018253 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c78b382-5735-4741-b087-cefda68053f4-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.018297 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c78b382-5735-4741-b087-cefda68053f4-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.018367 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c78b382-5735-4741-b087-cefda68053f4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.018391 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pvc-0719b23b-2d4a-4f7a-9219-162b8a48ac2f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0719b23b-2d4a-4f7a-9219-162b8a48ac2f\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.018461 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c78b382-5735-4741-b087-cefda68053f4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.018502 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c78b382-5735-4741-b087-cefda68053f4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.018692 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c78b382-5735-4741-b087-cefda68053f4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.018763 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrqxw\" (UniqueName: \"kubernetes.io/projected/6c78b382-5735-4741-b087-cefda68053f4-kube-api-access-wrqxw\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.121300 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c78b382-5735-4741-b087-cefda68053f4-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.121346 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c78b382-5735-4741-b087-cefda68053f4-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.121387 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c78b382-5735-4741-b087-cefda68053f4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.121440 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0719b23b-2d4a-4f7a-9219-162b8a48ac2f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0719b23b-2d4a-4f7a-9219-162b8a48ac2f\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.121491 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c78b382-5735-4741-b087-cefda68053f4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.121524 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c78b382-5735-4741-b087-cefda68053f4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.121608 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c78b382-5735-4741-b087-cefda68053f4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.121641 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrqxw\" (UniqueName: \"kubernetes.io/projected/6c78b382-5735-4741-b087-cefda68053f4-kube-api-access-wrqxw\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.123754 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c78b382-5735-4741-b087-cefda68053f4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.124027 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c78b382-5735-4741-b087-cefda68053f4-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.124262 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c78b382-5735-4741-b087-cefda68053f4-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.125460 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c78b382-5735-4741-b087-cefda68053f4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.126258 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.126281 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0719b23b-2d4a-4f7a-9219-162b8a48ac2f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0719b23b-2d4a-4f7a-9219-162b8a48ac2f\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/027a27b641fb7368c7689740923d79f5c6a055c8afb822909df54d9518dea3ef/globalmount\"" pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.138424 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c78b382-5735-4741-b087-cefda68053f4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.157273 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrqxw\" (UniqueName: \"kubernetes.io/projected/6c78b382-5735-4741-b087-cefda68053f4-kube-api-access-wrqxw\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.163972 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0719b23b-2d4a-4f7a-9219-162b8a48ac2f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0719b23b-2d4a-4f7a-9219-162b8a48ac2f\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.164524 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c78b382-5735-4741-b087-cefda68053f4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c78b382-5735-4741-b087-cefda68053f4\") " pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.184214 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"50e2e860-a414-4c3e-888e-ac5873f13d2d","Type":"ContainerStarted","Data":"322ea0c137cc48a65e3b48eff23bea9203168f789fa9365953812a72aae7be22"} Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.187335 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"254b4a13-ff42-41cb-ae18-373ad9cfc583","Type":"ContainerStarted","Data":"a57ab5a747bb434ba331cf5f68873fd57ca318b6c2bb40fb6da46558fef0f2b8"} Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.189476 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"993ad889-77c3-480e-8b5b-985766d488be","Type":"ContainerStarted","Data":"5b63f1e6abb9bfb560d3c31928ccb4aae967fa40cbbe40a0f07963acdb9761d6"} Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.193165 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.280207 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 22:09:04 crc kubenswrapper[4803]: I0127 22:09:04.972358 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.230509 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73021b6c-3762-44f7-af8d-efd3ff4e4b7b","Type":"ContainerStarted","Data":"6883c57f26c240ec55c24c4b1482462402722981a2e0e323cb0c53a93d307b46"} Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.488270 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.489858 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.495093 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.496768 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.497940 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6gjck" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.502554 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.506793 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.561950 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4493a984-e728-410f-9362-0795391f2793-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.562189 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4493a984-e728-410f-9362-0795391f2793-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.562266 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4493a984-e728-410f-9362-0795391f2793-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.562336 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4493a984-e728-410f-9362-0795391f2793-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.562375 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpzfn\" (UniqueName: 
\"kubernetes.io/projected/4493a984-e728-410f-9362-0795391f2793-kube-api-access-rpzfn\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.562450 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4493a984-e728-410f-9362-0795391f2793-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.562602 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-52a9de89-9bb6-4afb-8c92-62eea3858f46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52a9de89-9bb6-4afb-8c92-62eea3858f46\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.562972 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4493a984-e728-410f-9362-0795391f2793-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.665910 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4493a984-e728-410f-9362-0795391f2793-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.665979 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-52a9de89-9bb6-4afb-8c92-62eea3858f46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52a9de89-9bb6-4afb-8c92-62eea3858f46\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.666031 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4493a984-e728-410f-9362-0795391f2793-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.666095 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4493a984-e728-410f-9362-0795391f2793-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.666185 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4493a984-e728-410f-9362-0795391f2793-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.666207 4803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4493a984-e728-410f-9362-0795391f2793-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.666243 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4493a984-e728-410f-9362-0795391f2793-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.666262 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpzfn\" (UniqueName: \"kubernetes.io/projected/4493a984-e728-410f-9362-0795391f2793-kube-api-access-rpzfn\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.667613 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4493a984-e728-410f-9362-0795391f2793-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.669780 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4493a984-e728-410f-9362-0795391f2793-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.674108 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4493a984-e728-410f-9362-0795391f2793-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.679153 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4493a984-e728-410f-9362-0795391f2793-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.698861 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4493a984-e728-410f-9362-0795391f2793-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.698978 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4493a984-e728-410f-9362-0795391f2793-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.699104 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.699139 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-52a9de89-9bb6-4afb-8c92-62eea3858f46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52a9de89-9bb6-4afb-8c92-62eea3858f46\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/066596acb2433a8e33a2d8a6999cb054eebf29c5d751f23b09be0004c4d2e2d5/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.721650 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpzfn\" (UniqueName: \"kubernetes.io/projected/4493a984-e728-410f-9362-0795391f2793-kube-api-access-rpzfn\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.791640 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.792832 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.797711 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.797927 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.798061 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-r28kn" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.825641 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.826233 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-52a9de89-9bb6-4afb-8c92-62eea3858f46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52a9de89-9bb6-4afb-8c92-62eea3858f46\") pod \"openstack-cell1-galera-0\" (UID: \"4493a984-e728-410f-9362-0795391f2793\") " pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.875726 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16121bd0-7cdd-487b-a269-a2c6cfb35d76-config-data\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.875801 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/16121bd0-7cdd-487b-a269-a2c6cfb35d76-memcached-tls-certs\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.875941 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvm72\" (UniqueName: \"kubernetes.io/projected/16121bd0-7cdd-487b-a269-a2c6cfb35d76-kube-api-access-vvm72\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 
22:09:05.875973 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16121bd0-7cdd-487b-a269-a2c6cfb35d76-combined-ca-bundle\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.876006 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16121bd0-7cdd-487b-a269-a2c6cfb35d76-kolla-config\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.977921 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvm72\" (UniqueName: \"kubernetes.io/projected/16121bd0-7cdd-487b-a269-a2c6cfb35d76-kube-api-access-vvm72\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.977977 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16121bd0-7cdd-487b-a269-a2c6cfb35d76-combined-ca-bundle\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.978001 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16121bd0-7cdd-487b-a269-a2c6cfb35d76-kolla-config\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.978081 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16121bd0-7cdd-487b-a269-a2c6cfb35d76-config-data\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.978116 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/16121bd0-7cdd-487b-a269-a2c6cfb35d76-memcached-tls-certs\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.979094 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16121bd0-7cdd-487b-a269-a2c6cfb35d76-kolla-config\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.980220 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16121bd0-7cdd-487b-a269-a2c6cfb35d76-config-data\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc kubenswrapper[4803]: I0127 22:09:05.983478 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/16121bd0-7cdd-487b-a269-a2c6cfb35d76-memcached-tls-certs\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:05 crc 
kubenswrapper[4803]: I0127 22:09:05.984112 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16121bd0-7cdd-487b-a269-a2c6cfb35d76-combined-ca-bundle\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:06 crc kubenswrapper[4803]: I0127 22:09:06.009339 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvm72\" (UniqueName: \"kubernetes.io/projected/16121bd0-7cdd-487b-a269-a2c6cfb35d76-kube-api-access-vvm72\") pod \"memcached-0\" (UID: \"16121bd0-7cdd-487b-a269-a2c6cfb35d76\") " pod="openstack/memcached-0" Jan 27 22:09:06 crc kubenswrapper[4803]: I0127 22:09:06.128375 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:06 crc kubenswrapper[4803]: I0127 22:09:06.128484 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 22:09:07 crc kubenswrapper[4803]: I0127 22:09:07.909008 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 22:09:07 crc kubenswrapper[4803]: I0127 22:09:07.910571 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 22:09:07 crc kubenswrapper[4803]: I0127 22:09:07.918076 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-r9hzr" Jan 27 22:09:07 crc kubenswrapper[4803]: I0127 22:09:07.932651 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.051351 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6wzb\" (UniqueName: \"kubernetes.io/projected/ad3f4a0a-feb7-457e-bb68-9e0a8e420568-kube-api-access-k6wzb\") pod \"kube-state-metrics-0\" (UID: \"ad3f4a0a-feb7-457e-bb68-9e0a8e420568\") " pod="openstack/kube-state-metrics-0" Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.153450 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6wzb\" (UniqueName: \"kubernetes.io/projected/ad3f4a0a-feb7-457e-bb68-9e0a8e420568-kube-api-access-k6wzb\") pod \"kube-state-metrics-0\" (UID: \"ad3f4a0a-feb7-457e-bb68-9e0a8e420568\") " pod="openstack/kube-state-metrics-0" Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.201087 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6wzb\" (UniqueName: \"kubernetes.io/projected/ad3f4a0a-feb7-457e-bb68-9e0a8e420568-kube-api-access-k6wzb\") pod \"kube-state-metrics-0\" (UID: \"ad3f4a0a-feb7-457e-bb68-9e0a8e420568\") " pod="openstack/kube-state-metrics-0" Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.235091 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.802754 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g"] Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.804326 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g" Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.807365 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.819646 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-zqsh5" Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.840896 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g"] Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.889837 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbfecf3-a077-4d96-b7d5-d81b1c744194-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-zj24g\" (UID: \"7dbfecf3-a077-4d96-b7d5-d81b1c744194\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g" Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.889913 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v9vx\" (UniqueName: \"kubernetes.io/projected/7dbfecf3-a077-4d96-b7d5-d81b1c744194-kube-api-access-5v9vx\") pod \"observability-ui-dashboards-66cbf594b5-zj24g\" (UID: \"7dbfecf3-a077-4d96-b7d5-d81b1c744194\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g" Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.992184 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbfecf3-a077-4d96-b7d5-d81b1c744194-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-zj24g\" (UID: \"7dbfecf3-a077-4d96-b7d5-d81b1c744194\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g" Jan 27 22:09:08 crc kubenswrapper[4803]: I0127 22:09:08.992238 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v9vx\" (UniqueName: \"kubernetes.io/projected/7dbfecf3-a077-4d96-b7d5-d81b1c744194-kube-api-access-5v9vx\") pod \"observability-ui-dashboards-66cbf594b5-zj24g\" (UID: \"7dbfecf3-a077-4d96-b7d5-d81b1c744194\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g" Jan 27 22:09:08 crc kubenswrapper[4803]: E0127 22:09:08.992587 4803 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 27 22:09:08 crc kubenswrapper[4803]: E0127 22:09:08.992628 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dbfecf3-a077-4d96-b7d5-d81b1c744194-serving-cert podName:7dbfecf3-a077-4d96-b7d5-d81b1c744194 nodeName:}" failed. No retries permitted until 2026-01-27 22:09:09.492614986 +0000 UTC m=+1301.908636685 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7dbfecf3-a077-4d96-b7d5-d81b1c744194-serving-cert") pod "observability-ui-dashboards-66cbf594b5-zj24g" (UID: "7dbfecf3-a077-4d96-b7d5-d81b1c744194") : secret "observability-ui-dashboards" not found Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.012023 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v9vx\" (UniqueName: \"kubernetes.io/projected/7dbfecf3-a077-4d96-b7d5-d81b1c744194-kube-api-access-5v9vx\") pod \"observability-ui-dashboards-66cbf594b5-zj24g\" (UID: \"7dbfecf3-a077-4d96-b7d5-d81b1c744194\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.140270 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.145428 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.152107 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.152413 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.152528 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.152679 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.152784 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nxgns" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.160073 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.155702 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.164043 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.182840 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.272345 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-98b9df85f-f5gmm"] Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.274694 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.293945 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-98b9df85f-f5gmm"] Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.313597 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.313679 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.313720 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnfcj\" (UniqueName: \"kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-kube-api-access-cnfcj\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.313747 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.313797 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.313838 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.313888 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.313918 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.313964 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.314007 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.419956 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420263 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420412 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420451 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420488 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420559 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h42mt\" (UniqueName: \"kubernetes.io/projected/fa470512-29ae-4707-abdb-a93dd93f6b58-kube-api-access-h42mt\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc 
kubenswrapper[4803]: I0127 22:09:09.420597 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420620 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-service-ca\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420689 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa470512-29ae-4707-abdb-a93dd93f6b58-console-serving-cert\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420728 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-trusted-ca-bundle\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420753 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420796 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-oauth-serving-cert\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420838 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa470512-29ae-4707-abdb-a93dd93f6b58-console-oauth-config\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.420983 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.421105 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.421155 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-console-config\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.421195 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnfcj\" (UniqueName: \"kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-kube-api-access-cnfcj\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.421230 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.423724 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.424240 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.438996 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.439016 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.439170 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.454756 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" 
(UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.458379 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.471223 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnfcj\" (UniqueName: \"kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-kube-api-access-cnfcj\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.524040 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbfecf3-a077-4d96-b7d5-d81b1c744194-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-zj24g\" (UID: \"7dbfecf3-a077-4d96-b7d5-d81b1c744194\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.524129 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h42mt\" (UniqueName: \"kubernetes.io/projected/fa470512-29ae-4707-abdb-a93dd93f6b58-kube-api-access-h42mt\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.524148 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-service-ca\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.524183 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa470512-29ae-4707-abdb-a93dd93f6b58-console-serving-cert\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.524202 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-trusted-ca-bundle\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.524226 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-oauth-serving-cert\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.524247 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa470512-29ae-4707-abdb-a93dd93f6b58-console-oauth-config\") pod \"console-98b9df85f-f5gmm\" (UID: 
\"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.524327 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-console-config\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.525231 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-console-config\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.529869 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-oauth-serving-cert\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.530132 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbfecf3-a077-4d96-b7d5-d81b1c744194-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-zj24g\" (UID: \"7dbfecf3-a077-4d96-b7d5-d81b1c744194\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.530837 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-trusted-ca-bundle\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.531830 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa470512-29ae-4707-abdb-a93dd93f6b58-service-ca\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.536765 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa470512-29ae-4707-abdb-a93dd93f6b58-console-oauth-config\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.537230 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa470512-29ae-4707-abdb-a93dd93f6b58-console-serving-cert\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.540530 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.547144 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/08c67f674327cf14c0159546d65f5dd7b019eaac71000ad86f5fa5ecad0cfcfa/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.548248 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h42mt\" (UniqueName: \"kubernetes.io/projected/fa470512-29ae-4707-abdb-a93dd93f6b58-kube-api-access-h42mt\") pod \"console-98b9df85f-f5gmm\" (UID: \"fa470512-29ae-4707-abdb-a93dd93f6b58\") " pod="openshift-console/console-98b9df85f-f5gmm"
Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.601557 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-98b9df85f-f5gmm"
Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.644903 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\") pod \"prometheus-metric-storage-0\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.741779 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g"
Jan 27 22:09:09 crc kubenswrapper[4803]: I0127 22:09:09.792016 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.535373 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xfps2"]
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.536804 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.539267 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.539562 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-xfb9m"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.539703 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.550358 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xfps2"]
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.593926 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5ch2x"]
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.595960 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.603136 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5ch2x"]
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.680880 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfdxr\" (UniqueName: \"kubernetes.io/projected/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-kube-api-access-tfdxr\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.680954 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-scripts\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.681056 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-combined-ca-bundle\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.681103 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9s9p\" (UniqueName: \"kubernetes.io/projected/302d32b5-3246-4bbc-877e-700ecd30afbd-kube-api-access-d9s9p\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.681134 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-var-lib\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.681169 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-ovn-controller-tls-certs\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.681205 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-var-run\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.681232 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-var-run-ovn\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.681249 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-var-run\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.681267 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-var-log\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.681284 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/302d32b5-3246-4bbc-877e-700ecd30afbd-scripts\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.681347 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-etc-ovs\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.681363 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-var-log-ovn\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.783389 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-etc-ovs\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.783432 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-var-log-ovn\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.783519 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfdxr\" (UniqueName: \"kubernetes.io/projected/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-kube-api-access-tfdxr\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.783559 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-scripts\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.784190 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-etc-ovs\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.784335 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-var-log-ovn\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.786358 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-scripts\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.783961 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-combined-ca-bundle\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.786467 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9s9p\" (UniqueName: \"kubernetes.io/projected/302d32b5-3246-4bbc-877e-700ecd30afbd-kube-api-access-d9s9p\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.786494 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-var-lib\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.786765 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-ovn-controller-tls-certs\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.786784 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-var-lib\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.787298 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-var-run\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.787497 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-var-run\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.787595 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-var-run\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.787622 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-var-run-ovn\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.787653 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-var-log\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.787676 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/302d32b5-3246-4bbc-877e-700ecd30afbd-scripts\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.789010 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-var-run-ovn\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.789052 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-var-log\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.789089 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/302d32b5-3246-4bbc-877e-700ecd30afbd-var-run\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.789501 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-ovn-controller-tls-certs\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.790568 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/302d32b5-3246-4bbc-877e-700ecd30afbd-scripts\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.799688 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-combined-ca-bundle\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.803544 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfdxr\" (UniqueName: \"kubernetes.io/projected/3f1dc5cb-1275-4cf9-8c71-f9575161f73f-kube-api-access-tfdxr\") pod \"ovn-controller-xfps2\" (UID: \"3f1dc5cb-1275-4cf9-8c71-f9575161f73f\") " pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.804120 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9s9p\" (UniqueName: \"kubernetes.io/projected/302d32b5-3246-4bbc-877e-700ecd30afbd-kube-api-access-d9s9p\") pod \"ovn-controller-ovs-5ch2x\" (UID: \"302d32b5-3246-4bbc-877e-700ecd30afbd\") " pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.861617 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xfps2"
Jan 27 22:09:11 crc kubenswrapper[4803]: I0127 22:09:11.919328 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5ch2x"
Jan 27 22:09:13 crc kubenswrapper[4803]: I0127 22:09:13.312562 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c78b382-5735-4741-b087-cefda68053f4","Type":"ContainerStarted","Data":"f2e2d7ce5959bb4036998a9e86f41e1a9c39e4d02384a2da994a3a8539e7cfbf"}
Jan 27 22:09:13 crc kubenswrapper[4803]: I0127 22:09:13.917296 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 27 22:09:13 crc kubenswrapper[4803]: I0127 22:09:13.919237 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:13 crc kubenswrapper[4803]: I0127 22:09:13.924159 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 27 22:09:13 crc kubenswrapper[4803]: I0127 22:09:13.924377 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 27 22:09:13 crc kubenswrapper[4803]: I0127 22:09:13.924616 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 27 22:09:13 crc kubenswrapper[4803]: I0127 22:09:13.928074 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-hslq6"
Jan 27 22:09:13 crc kubenswrapper[4803]: I0127 22:09:13.931278 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 27 22:09:13 crc kubenswrapper[4803]: I0127 22:09:13.954739 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.032154 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.032206 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzd2s\" (UniqueName: \"kubernetes.io/projected/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-kube-api-access-nzd2s\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.032233 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-config\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.032299 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.032320 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.032361 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.032384 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.032456 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0a10108-36a6-4e5f-bb4d-274b0d24e0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0a10108-36a6-4e5f-bb4d-274b0d24e0fa\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.133937 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.133991 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzd2s\" (UniqueName: \"kubernetes.io/projected/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-kube-api-access-nzd2s\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.134029 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-config\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.134053 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.134109 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.134144 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.134187 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.134223 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a0a10108-36a6-4e5f-bb4d-274b0d24e0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0a10108-36a6-4e5f-bb4d-274b0d24e0fa\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.135119 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.136393 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-config\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.137220 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.138541 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.138599 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a0a10108-36a6-4e5f-bb4d-274b0d24e0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0a10108-36a6-4e5f-bb4d-274b0d24e0fa\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8909d6763c56cfe3bc5a396c2ded46a549f96be727fe96e327fddc74e2e81f27/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.144771 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.145627 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.151384 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.153700 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzd2s\" (UniqueName: \"kubernetes.io/projected/8a269bc9-9bdc-4d66-b435-2ec777b4bdcd-kube-api-access-nzd2s\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.200574 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a0a10108-36a6-4e5f-bb4d-274b0d24e0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0a10108-36a6-4e5f-bb4d-274b0d24e0fa\") pod \"ovsdbserver-sb-0\" (UID: \"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.257019 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.815557 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.817763 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.821383 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.821571 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.821705 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-skfg5"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.824270 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.825882 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.853366 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cb0e5b16-8baa-435a-bae9-7b09e5602b43-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.853450 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0e5b16-8baa-435a-bae9-7b09e5602b43-config\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.853502 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0e5b16-8baa-435a-bae9-7b09e5602b43-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.853918 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcbdt\" (UniqueName: \"kubernetes.io/projected/cb0e5b16-8baa-435a-bae9-7b09e5602b43-kube-api-access-pcbdt\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.854034 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39d909a5-f0fb-47b6-8ee9-161e872bff98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d909a5-f0fb-47b6-8ee9-161e872bff98\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.854075 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb0e5b16-8baa-435a-bae9-7b09e5602b43-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.854261 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0e5b16-8baa-435a-bae9-7b09e5602b43-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.854317 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0e5b16-8baa-435a-bae9-7b09e5602b43-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.956588 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcbdt\" (UniqueName: \"kubernetes.io/projected/cb0e5b16-8baa-435a-bae9-7b09e5602b43-kube-api-access-pcbdt\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.956648 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39d909a5-f0fb-47b6-8ee9-161e872bff98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d909a5-f0fb-47b6-8ee9-161e872bff98\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.956673 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb0e5b16-8baa-435a-bae9-7b09e5602b43-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.956721 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0e5b16-8baa-435a-bae9-7b09e5602b43-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.956737 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0e5b16-8baa-435a-bae9-7b09e5602b43-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.956758 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cb0e5b16-8baa-435a-bae9-7b09e5602b43-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.956796 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0e5b16-8baa-435a-bae9-7b09e5602b43-config\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.956825 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0e5b16-8baa-435a-bae9-7b09e5602b43-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.957241 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cb0e5b16-8baa-435a-bae9-7b09e5602b43-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.958072 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0e5b16-8baa-435a-bae9-7b09e5602b43-config\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.958206 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb0e5b16-8baa-435a-bae9-7b09e5602b43-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.958469 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.958516 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39d909a5-f0fb-47b6-8ee9-161e872bff98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d909a5-f0fb-47b6-8ee9-161e872bff98\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fcfd3520ee59be17d26ddfb887837a304978b04dfe803e3b82d2f42404984073/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.962663 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0e5b16-8baa-435a-bae9-7b09e5602b43-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.973176 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcbdt\" (UniqueName: \"kubernetes.io/projected/cb0e5b16-8baa-435a-bae9-7b09e5602b43-kube-api-access-pcbdt\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.974588 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0e5b16-8baa-435a-bae9-7b09e5602b43-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:14 crc kubenswrapper[4803]: I0127 22:09:14.975273 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0e5b16-8baa-435a-bae9-7b09e5602b43-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:15 crc kubenswrapper[4803]: I0127 22:09:15.001245 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39d909a5-f0fb-47b6-8ee9-161e872bff98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d909a5-f0fb-47b6-8ee9-161e872bff98\") pod \"ovsdbserver-nb-0\" (UID: \"cb0e5b16-8baa-435a-bae9-7b09e5602b43\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:15 crc kubenswrapper[4803]: I0127 22:09:15.145861 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 27 22:09:16 crc kubenswrapper[4803]: I0127 22:09:16.344437 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 22:09:16 crc kubenswrapper[4803]: I0127 22:09:16.344493 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 22:09:16 crc kubenswrapper[4803]: I0127 22:09:16.344535 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp"
Jan 27 22:09:16 crc kubenswrapper[4803]: I0127 22:09:16.345293 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44535dae9f522c885b28c5811071a2781a43938af387dee7b52c5fee20b7bdeb"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 22:09:16 crc kubenswrapper[4803]: I0127 22:09:16.345350 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://44535dae9f522c885b28c5811071a2781a43938af387dee7b52c5fee20b7bdeb" gracePeriod=600
Jan 27 22:09:17 crc kubenswrapper[4803]: I0127 22:09:17.361188 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="44535dae9f522c885b28c5811071a2781a43938af387dee7b52c5fee20b7bdeb" exitCode=0
Jan 27 22:09:17 crc kubenswrapper[4803]: I0127 22:09:17.361228 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"44535dae9f522c885b28c5811071a2781a43938af387dee7b52c5fee20b7bdeb"}
Jan 27 22:09:17 crc kubenswrapper[4803]: I0127 22:09:17.361524 4803 scope.go:117] "RemoveContainer" containerID="a5bed6f52f57219858cf339986b99dcfe79ad6cdcbe8912b0cb981f2d60d0415"
Jan 27 22:09:24 crc kubenswrapper[4803]: E0127 22:09:24.323711 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 27 22:09:24 crc kubenswrapper[4803]: E0127 22:09:24.324274 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2szl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(50e2e860-a414-4c3e-888e-ac5873f13d2d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 22:09:24 crc kubenswrapper[4803]: E0127 22:09:24.325449 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="50e2e860-a414-4c3e-888e-ac5873f13d2d"
Jan 27 22:09:24 crc kubenswrapper[4803]: I0127 22:09:24.793625 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 27 22:09:25 crc kubenswrapper[4803]: E0127 22:09:25.252831 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 27 22:09:25 crc kubenswrapper[4803]: E0127 22:09:25.253018 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cw7qs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-h2p88_openstack(30665406-a35a-42a3-b979-45e64be7e47c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 22:09:25 crc kubenswrapper[4803]: E0127 22:09:25.254419 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" podUID="30665406-a35a-42a3-b979-45e64be7e47c"
Jan 27 22:09:25 crc kubenswrapper[4803]: E0127 22:09:25.289127 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 27 22:09:25 crc kubenswrapper[4803]: E0127 22:09:25.289368 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6wsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-8jgrp_openstack(6a75dbc6-2f5d-47c1-96f4-4af86d4ead23): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 22:09:25 crc kubenswrapper[4803]: E0127 22:09:25.291265 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" podUID="6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" Jan 27 22:09:25 crc kubenswrapper[4803]: E0127 22:09:25.298807 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 22:09:25 crc kubenswrapper[4803]: E0127 22:09:25.298984 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sn5hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-8lb9m_openstack(6079ed9d-a8d5-43d9-955b-f165e96ac559): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 22:09:25 crc kubenswrapper[4803]: E0127 22:09:25.300168 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" podUID="6079ed9d-a8d5-43d9-955b-f165e96ac559" Jan 27 22:09:25 crc kubenswrapper[4803]: E0127 22:09:25.443731 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" podUID="6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.747959 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.755655 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.865594 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30665406-a35a-42a3-b979-45e64be7e47c-config\") pod \"30665406-a35a-42a3-b979-45e64be7e47c\" (UID: \"30665406-a35a-42a3-b979-45e64be7e47c\") " Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.865661 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-config\") pod \"6079ed9d-a8d5-43d9-955b-f165e96ac559\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.865697 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw7qs\" (UniqueName: \"kubernetes.io/projected/30665406-a35a-42a3-b979-45e64be7e47c-kube-api-access-cw7qs\") pod \"30665406-a35a-42a3-b979-45e64be7e47c\" (UID: \"30665406-a35a-42a3-b979-45e64be7e47c\") " Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.865800 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-dns-svc\") pod \"6079ed9d-a8d5-43d9-955b-f165e96ac559\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.865866 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn5hg\" (UniqueName: \"kubernetes.io/projected/6079ed9d-a8d5-43d9-955b-f165e96ac559-kube-api-access-sn5hg\") pod \"6079ed9d-a8d5-43d9-955b-f165e96ac559\" (UID: \"6079ed9d-a8d5-43d9-955b-f165e96ac559\") " Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.866176 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30665406-a35a-42a3-b979-45e64be7e47c-config" (OuterVolumeSpecName: "config") pod "30665406-a35a-42a3-b979-45e64be7e47c" (UID: "30665406-a35a-42a3-b979-45e64be7e47c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.866554 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30665406-a35a-42a3-b979-45e64be7e47c-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.866182 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-config" (OuterVolumeSpecName: "config") pod "6079ed9d-a8d5-43d9-955b-f165e96ac559" (UID: "6079ed9d-a8d5-43d9-955b-f165e96ac559"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.866984 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6079ed9d-a8d5-43d9-955b-f165e96ac559" (UID: "6079ed9d-a8d5-43d9-955b-f165e96ac559"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.880314 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6079ed9d-a8d5-43d9-955b-f165e96ac559-kube-api-access-sn5hg" (OuterVolumeSpecName: "kube-api-access-sn5hg") pod "6079ed9d-a8d5-43d9-955b-f165e96ac559" (UID: "6079ed9d-a8d5-43d9-955b-f165e96ac559"). InnerVolumeSpecName "kube-api-access-sn5hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.883097 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30665406-a35a-42a3-b979-45e64be7e47c-kube-api-access-cw7qs" (OuterVolumeSpecName: "kube-api-access-cw7qs") pod "30665406-a35a-42a3-b979-45e64be7e47c" (UID: "30665406-a35a-42a3-b979-45e64be7e47c"). InnerVolumeSpecName "kube-api-access-cw7qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.969543 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.969924 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn5hg\" (UniqueName: \"kubernetes.io/projected/6079ed9d-a8d5-43d9-955b-f165e96ac559-kube-api-access-sn5hg\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.969939 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6079ed9d-a8d5-43d9-955b-f165e96ac559-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:27 crc kubenswrapper[4803]: I0127 22:09:27.969952 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw7qs\" (UniqueName: \"kubernetes.io/projected/30665406-a35a-42a3-b979-45e64be7e47c-kube-api-access-cw7qs\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.034655 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xfps2"] Jan 27 22:09:28 crc kubenswrapper[4803]: W0127 22:09:28.092758 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f1dc5cb_1275_4cf9_8c71_f9575161f73f.slice/crio-1b4a37784ff2cee1465fd3d6b1b4a73afcb238a677186ef59e96ba15bb459635 WatchSource:0}: Error finding container 1b4a37784ff2cee1465fd3d6b1b4a73afcb238a677186ef59e96ba15bb459635: Status 404 returned error can't find the container with id 1b4a37784ff2cee1465fd3d6b1b4a73afcb238a677186ef59e96ba15bb459635 Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.491011 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c78b382-5735-4741-b087-cefda68053f4","Type":"ContainerStarted","Data":"e97490c8142eb00c38a293af52be996a5f4bcc870ee66e0fa0f6ff9293c22cff"} Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.495029 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c"} Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.505536 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"16121bd0-7cdd-487b-a269-a2c6cfb35d76","Type":"ContainerStarted","Data":"e628d1c7cb285fb1a17e7114adf95e1a76cbd382e7663f8dbfd6c945bda57580"} Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.508191 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" event={"ID":"30665406-a35a-42a3-b979-45e64be7e47c","Type":"ContainerDied","Data":"91276125a71ea70d3eef07cc5d3f5203462357c3cf03f81c4d1ae3109bfac01c"} Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.508305 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h2p88" Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.536710 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xfps2" event={"ID":"3f1dc5cb-1275-4cf9-8c71-f9575161f73f","Type":"ContainerStarted","Data":"1b4a37784ff2cee1465fd3d6b1b4a73afcb238a677186ef59e96ba15bb459635"} Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.542909 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" event={"ID":"6079ed9d-a8d5-43d9-955b-f165e96ac559","Type":"ContainerDied","Data":"2a3430a54a7f27c7b86e3e7ec809ac5609a0343ea81d3d28763a20dd966b5fb2"} Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.543039 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8lb9m" Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.547872 4803 generic.go:334] "Generic (PLEG): container finished" podID="d2331ee6-b42a-43ef-b314-ab0084130872" containerID="995b5e492294e6c71e0885c93f16289b7820bff6b9d1a06188083c3549b22660" exitCode=0 Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.547916 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" event={"ID":"d2331ee6-b42a-43ef-b314-ab0084130872","Type":"ContainerDied","Data":"995b5e492294e6c71e0885c93f16289b7820bff6b9d1a06188083c3549b22660"} Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.667136 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8lb9m"] Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.693485 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8lb9m"] Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.770427 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h2p88"] Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.787971 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h2p88"] Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.796383 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g"] Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.806555 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.818242 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 22:09:28 crc kubenswrapper[4803]: I0127 22:09:28.823145 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 22:09:28 crc kubenswrapper[4803]: W0127 22:09:28.892402 4803 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dbfecf3_a077_4d96_b7d5_d81b1c744194.slice/crio-021adc390a6f6a62b765bc354369a894e7882798955ebc15363a3ebeb6c4afd9 WatchSource:0}: Error finding container 021adc390a6f6a62b765bc354369a894e7882798955ebc15363a3ebeb6c4afd9: Status 404 returned error can't find the container with id 021adc390a6f6a62b765bc354369a894e7882798955ebc15363a3ebeb6c4afd9 Jan 27 22:09:28 crc kubenswrapper[4803]: W0127 22:09:28.893758 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad3f4a0a_feb7_457e_bb68_9e0a8e420568.slice/crio-303c3d8771d355786b72d85004e6274ea027dc732235d85840bb05c61e8b9c5c WatchSource:0}: Error finding container 303c3d8771d355786b72d85004e6274ea027dc732235d85840bb05c61e8b9c5c: Status 404 returned error can't find the container with id 303c3d8771d355786b72d85004e6274ea027dc732235d85840bb05c61e8b9c5c Jan 27 22:09:28 crc kubenswrapper[4803]: W0127 22:09:28.905580 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod006465d9_12d6_4d2e_a02e_8a2669bdcbef.slice/crio-0af98fc78732b1fc9499d916735ae5173a426b0a5326fd849f3f6579db3a299c WatchSource:0}: Error finding container 0af98fc78732b1fc9499d916735ae5173a426b0a5326fd849f3f6579db3a299c: Status 404 returned error can't find the container with id 0af98fc78732b1fc9499d916735ae5173a426b0a5326fd849f3f6579db3a299c Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.013751 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-98b9df85f-f5gmm"] Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.022432 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.092584 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 22:09:29 crc kubenswrapper[4803]: W0127 22:09:29.191246 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa470512_29ae_4707_abdb_a93dd93f6b58.slice/crio-db87052a3cdefec709f0bdfc9f1135e24f2dbb334f7b4db87dfb7a170aacc060 WatchSource:0}: Error finding container db87052a3cdefec709f0bdfc9f1135e24f2dbb334f7b4db87dfb7a170aacc060: Status 404 returned error can't find the container with id db87052a3cdefec709f0bdfc9f1135e24f2dbb334f7b4db87dfb7a170aacc060 Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.221151 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5ch2x"] Jan 27 22:09:29 crc kubenswrapper[4803]: W0127 22:09:29.296355 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4493a984_e728_410f_9362_0795391f2793.slice/crio-2617454ec53d021a78d4c0d3de38b3327a57dce3027abac25168c770a88e8172 WatchSource:0}: Error finding container 2617454ec53d021a78d4c0d3de38b3327a57dce3027abac25168c770a88e8172: Status 404 returned error can't find the container with id 2617454ec53d021a78d4c0d3de38b3327a57dce3027abac25168c770a88e8172 Jan 27 22:09:29 crc kubenswrapper[4803]: W0127 22:09:29.297785 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb0e5b16_8baa_435a_bae9_7b09e5602b43.slice/crio-d6c4789b1db34b591c1138f466de20dcf43d1d9570d9b0eff9d539354a0ad5c4 WatchSource:0}: Error finding 
container d6c4789b1db34b591c1138f466de20dcf43d1d9570d9b0eff9d539354a0ad5c4: Status 404 returned error can't find the container with id d6c4789b1db34b591c1138f466de20dcf43d1d9570d9b0eff9d539354a0ad5c4 Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.557287 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5ch2x" event={"ID":"302d32b5-3246-4bbc-877e-700ecd30afbd","Type":"ContainerStarted","Data":"ff5f377e75cf2e424db98511ba81eb2c7eac09a0f69c63306750727dfeff9d88"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.565373 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4493a984-e728-410f-9362-0795391f2793","Type":"ContainerStarted","Data":"2617454ec53d021a78d4c0d3de38b3327a57dce3027abac25168c770a88e8172"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.568740 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-98b9df85f-f5gmm" event={"ID":"fa470512-29ae-4707-abdb-a93dd93f6b58","Type":"ContainerStarted","Data":"db87052a3cdefec709f0bdfc9f1135e24f2dbb334f7b4db87dfb7a170aacc060"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.572619 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73021b6c-3762-44f7-af8d-efd3ff4e4b7b","Type":"ContainerStarted","Data":"0b2c830dc721a2edad3fd418354a9a2e73aa5da7b6de027ce46a3e2b2064fa6b"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.576772 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cb0e5b16-8baa-435a-bae9-7b09e5602b43","Type":"ContainerStarted","Data":"d6c4789b1db34b591c1138f466de20dcf43d1d9570d9b0eff9d539354a0ad5c4"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.578470 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ad3f4a0a-feb7-457e-bb68-9e0a8e420568","Type":"ContainerStarted","Data":"303c3d8771d355786b72d85004e6274ea027dc732235d85840bb05c61e8b9c5c"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.584238 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"006465d9-12d6-4d2e-a02e-8a2669bdcbef","Type":"ContainerStarted","Data":"0af98fc78732b1fc9499d916735ae5173a426b0a5326fd849f3f6579db3a299c"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.586877 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"254b4a13-ff42-41cb-ae18-373ad9cfc583","Type":"ContainerStarted","Data":"ca8197e506a06cf62307479ac31e9ea0d6627d531e6aead1b3345820efde09db"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.590066 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"993ad889-77c3-480e-8b5b-985766d488be","Type":"ContainerStarted","Data":"c21b90b93949fe0dc88c565a42c81d7fafe84c23ccf407e2c619db232c66744d"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.600753 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd","Type":"ContainerStarted","Data":"41168b3f64c99802538ce21af88c8f6062d8721787d149772caa94372ecbfb80"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.603581 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g" 
event={"ID":"7dbfecf3-a077-4d96-b7d5-d81b1c744194","Type":"ContainerStarted","Data":"021adc390a6f6a62b765bc354369a894e7882798955ebc15363a3ebeb6c4afd9"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.613865 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" event={"ID":"d2331ee6-b42a-43ef-b314-ab0084130872","Type":"ContainerStarted","Data":"44eebac9d3582f51e08e051e75ff94b98d8bc1c5d73dc3b58bdef79de72df67e"} Jan 27 22:09:29 crc kubenswrapper[4803]: I0127 22:09:29.701795 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" podStartSLOduration=3.2248902680000002 podStartE2EDuration="28.701776927s" podCreationTimestamp="2026-01-27 22:09:01 +0000 UTC" firstStartedPulling="2026-01-27 22:09:02.286096344 +0000 UTC m=+1294.702118043" lastFinishedPulling="2026-01-27 22:09:27.762983013 +0000 UTC m=+1320.179004702" observedRunningTime="2026-01-27 22:09:29.691384988 +0000 UTC m=+1322.107406697" watchObservedRunningTime="2026-01-27 22:09:29.701776927 +0000 UTC m=+1322.117798626" Jan 27 22:09:30 crc kubenswrapper[4803]: I0127 22:09:30.318688 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30665406-a35a-42a3-b979-45e64be7e47c" path="/var/lib/kubelet/pods/30665406-a35a-42a3-b979-45e64be7e47c/volumes" Jan 27 22:09:30 crc kubenswrapper[4803]: I0127 22:09:30.319872 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6079ed9d-a8d5-43d9-955b-f165e96ac559" path="/var/lib/kubelet/pods/6079ed9d-a8d5-43d9-955b-f165e96ac559/volumes" Jan 27 22:09:30 crc kubenswrapper[4803]: I0127 22:09:30.624911 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4493a984-e728-410f-9362-0795391f2793","Type":"ContainerStarted","Data":"89ac47fba174a3ad88bba2733e390c504a51d4bff54fc58299220dea8afa2b8a"} Jan 27 22:09:30 crc kubenswrapper[4803]: I0127 22:09:30.628432 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-98b9df85f-f5gmm" event={"ID":"fa470512-29ae-4707-abdb-a93dd93f6b58","Type":"ContainerStarted","Data":"a2a44aa47f06462db5296bc332114eb143798cd5cc78761f3d8ca741e57e2138"} Jan 27 22:09:30 crc kubenswrapper[4803]: I0127 22:09:30.633003 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"50e2e860-a414-4c3e-888e-ac5873f13d2d","Type":"ContainerStarted","Data":"c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd"} Jan 27 22:09:30 crc kubenswrapper[4803]: I0127 22:09:30.633765 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:30 crc kubenswrapper[4803]: I0127 22:09:30.671308 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-98b9df85f-f5gmm" podStartSLOduration=21.671283733 podStartE2EDuration="21.671283733s" podCreationTimestamp="2026-01-27 22:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:09:30.664630264 +0000 UTC m=+1323.080651973" watchObservedRunningTime="2026-01-27 22:09:30.671283733 +0000 UTC m=+1323.087305432" Jan 27 22:09:32 crc kubenswrapper[4803]: I0127 22:09:32.655295 4803 generic.go:334] "Generic (PLEG): container finished" podID="6c78b382-5735-4741-b087-cefda68053f4" containerID="e97490c8142eb00c38a293af52be996a5f4bcc870ee66e0fa0f6ff9293c22cff" exitCode=0 Jan 27 
22:09:32 crc kubenswrapper[4803]: I0127 22:09:32.655367 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c78b382-5735-4741-b087-cefda68053f4","Type":"ContainerDied","Data":"e97490c8142eb00c38a293af52be996a5f4bcc870ee66e0fa0f6ff9293c22cff"} Jan 27 22:09:33 crc kubenswrapper[4803]: I0127 22:09:33.665145 4803 generic.go:334] "Generic (PLEG): container finished" podID="4493a984-e728-410f-9362-0795391f2793" containerID="89ac47fba174a3ad88bba2733e390c504a51d4bff54fc58299220dea8afa2b8a" exitCode=0 Jan 27 22:09:33 crc kubenswrapper[4803]: I0127 22:09:33.665235 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4493a984-e728-410f-9362-0795391f2793","Type":"ContainerDied","Data":"89ac47fba174a3ad88bba2733e390c504a51d4bff54fc58299220dea8afa2b8a"} Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.687763 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"16121bd0-7cdd-487b-a269-a2c6cfb35d76","Type":"ContainerStarted","Data":"6adbe01a7e9e86f0ef53795a5395fef38a17099cba02fae5da22360cfc705d20"} Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.689244 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.691911 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd","Type":"ContainerStarted","Data":"476484cc61d6f6a36896a8aa4c6311f637ef3890321d4b5876c84a0b9857b697"} Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.694808 4803 generic.go:334] "Generic (PLEG): container finished" podID="302d32b5-3246-4bbc-877e-700ecd30afbd" containerID="927ae9213209d4b6132952d364bc43bb18f11ba03d7c38dec48ab9906b4cd3d9" exitCode=0 Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.694862 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5ch2x" event={"ID":"302d32b5-3246-4bbc-877e-700ecd30afbd","Type":"ContainerDied","Data":"927ae9213209d4b6132952d364bc43bb18f11ba03d7c38dec48ab9906b4cd3d9"} Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.697292 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4493a984-e728-410f-9362-0795391f2793","Type":"ContainerStarted","Data":"b377002717e410ad179d88d9b643c5b6f14ddaabc67985dc331b619f08ea2116"} Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.701207 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cb0e5b16-8baa-435a-bae9-7b09e5602b43","Type":"ContainerStarted","Data":"a196a44432f7224dc1490806b78e4ed6f00ce973908a76ce37925bc278469a7c"} Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.708172 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ad3f4a0a-feb7-457e-bb68-9e0a8e420568","Type":"ContainerStarted","Data":"ff642124702bafef96d2171fb5b9d348c6ca8d70c0861bd1fd2117036e39846d"} Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.710014 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.714962 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xfps2" 
event={"ID":"3f1dc5cb-1275-4cf9-8c71-f9575161f73f","Type":"ContainerStarted","Data":"1bd9d8811e1d8968ce7a9710d940dcbdf25dc34a67dc228615e705217f0f611f"} Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.715046 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xfps2" Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.717227 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g" event={"ID":"7dbfecf3-a077-4d96-b7d5-d81b1c744194","Type":"ContainerStarted","Data":"86c2b819344540e1c2efb67fc21ddfe0a430a6064db6ef5fb80a6780a481e8c7"} Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.722976 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=25.626933639 podStartE2EDuration="30.722954962s" podCreationTimestamp="2026-01-27 22:09:05 +0000 UTC" firstStartedPulling="2026-01-27 22:09:27.542727448 +0000 UTC m=+1319.958749147" lastFinishedPulling="2026-01-27 22:09:32.638748781 +0000 UTC m=+1325.054770470" observedRunningTime="2026-01-27 22:09:35.703556079 +0000 UTC m=+1328.119577778" watchObservedRunningTime="2026-01-27 22:09:35.722954962 +0000 UTC m=+1328.138976671" Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.733358 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=31.733342341 podStartE2EDuration="31.733342341s" podCreationTimestamp="2026-01-27 22:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:09:35.729433076 +0000 UTC m=+1328.145454775" watchObservedRunningTime="2026-01-27 22:09:35.733342341 +0000 UTC m=+1328.149364030" Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.737651 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c78b382-5735-4741-b087-cefda68053f4","Type":"ContainerStarted","Data":"3abfa89db2c69b77e3243b70fc7639be8d55df5685260f5eaf42b68c83d1de7f"} Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.785176 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=19.001334546 podStartE2EDuration="33.785151657s" podCreationTimestamp="2026-01-27 22:09:02 +0000 UTC" firstStartedPulling="2026-01-27 22:09:13.040404692 +0000 UTC m=+1305.456426391" lastFinishedPulling="2026-01-27 22:09:27.824221803 +0000 UTC m=+1320.240243502" observedRunningTime="2026-01-27 22:09:35.771779267 +0000 UTC m=+1328.187800996" watchObservedRunningTime="2026-01-27 22:09:35.785151657 +0000 UTC m=+1328.201173356" Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.806356 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-zj24g" podStartSLOduration=22.869200445 podStartE2EDuration="27.806336868s" podCreationTimestamp="2026-01-27 22:09:08 +0000 UTC" firstStartedPulling="2026-01-27 22:09:28.89847263 +0000 UTC m=+1321.314494329" lastFinishedPulling="2026-01-27 22:09:33.835609053 +0000 UTC m=+1326.251630752" observedRunningTime="2026-01-27 22:09:35.789381171 +0000 UTC m=+1328.205402880" watchObservedRunningTime="2026-01-27 22:09:35.806336868 +0000 UTC m=+1328.222358567" Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.829145 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-xfps2" podStartSLOduration=19.190441845 podStartE2EDuration="24.829118361s" podCreationTimestamp="2026-01-27 22:09:11 +0000 UTC" firstStartedPulling="2026-01-27 22:09:28.096541491 +0000 UTC m=+1320.512563190" lastFinishedPulling="2026-01-27 22:09:33.735218007 +0000 UTC m=+1326.151239706" observedRunningTime="2026-01-27 22:09:35.810760417 +0000 UTC m=+1328.226782136" watchObservedRunningTime="2026-01-27 22:09:35.829118361 +0000 UTC m=+1328.245140060" Jan 27 22:09:35 crc kubenswrapper[4803]: I0127 22:09:35.862220 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.917018417 podStartE2EDuration="28.862196333s" podCreationTimestamp="2026-01-27 22:09:07 +0000 UTC" firstStartedPulling="2026-01-27 22:09:28.960085261 +0000 UTC m=+1321.376106960" lastFinishedPulling="2026-01-27 22:09:34.905263177 +0000 UTC m=+1327.321284876" observedRunningTime="2026-01-27 22:09:35.833292835 +0000 UTC m=+1328.249314544" watchObservedRunningTime="2026-01-27 22:09:35.862196333 +0000 UTC m=+1328.278218042" Jan 27 22:09:36 crc kubenswrapper[4803]: I0127 22:09:36.129066 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:36 crc kubenswrapper[4803]: I0127 22:09:36.129477 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:36 crc kubenswrapper[4803]: I0127 22:09:36.737945 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:09:36 crc kubenswrapper[4803]: I0127 22:09:36.751631 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5ch2x" event={"ID":"302d32b5-3246-4bbc-877e-700ecd30afbd","Type":"ContainerStarted","Data":"de8bb0ed07c64a98bf5729e3313fc4b61a70d30d566c28d3000483305dd225d8"} Jan 27 22:09:36 crc kubenswrapper[4803]: I0127 22:09:36.751672 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5ch2x" event={"ID":"302d32b5-3246-4bbc-877e-700ecd30afbd","Type":"ContainerStarted","Data":"6407a3a94bd80741c0af33339b5e671b2a61afa1fe0aff8710a77e764c4ae8cd"} Jan 27 22:09:36 crc kubenswrapper[4803]: I0127 22:09:36.788244 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5ch2x" podStartSLOduration=21.355359994 podStartE2EDuration="25.788228887s" podCreationTimestamp="2026-01-27 22:09:11 +0000 UTC" firstStartedPulling="2026-01-27 22:09:29.302359895 +0000 UTC m=+1321.718381594" lastFinishedPulling="2026-01-27 22:09:33.735228788 +0000 UTC m=+1326.151250487" observedRunningTime="2026-01-27 22:09:36.78277902 +0000 UTC m=+1329.198800729" watchObservedRunningTime="2026-01-27 22:09:36.788228887 +0000 UTC m=+1329.204250586" Jan 27 22:09:36 crc kubenswrapper[4803]: I0127 22:09:36.920715 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5ch2x" Jan 27 22:09:36 crc kubenswrapper[4803]: I0127 22:09:36.920925 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5ch2x" Jan 27 22:09:37 crc kubenswrapper[4803]: I0127 22:09:37.763400 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"006465d9-12d6-4d2e-a02e-8a2669bdcbef","Type":"ContainerStarted","Data":"64a4d8d38614f6fe156a56ec2cc98eb8d14dedc403fe50c59b65d5eb8ed368ae"} Jan 27 
22:09:39 crc kubenswrapper[4803]: I0127 22:09:39.602543 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:39 crc kubenswrapper[4803]: I0127 22:09:39.603196 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:39 crc kubenswrapper[4803]: I0127 22:09:39.609177 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:39 crc kubenswrapper[4803]: I0127 22:09:39.788202 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 22:09:39 crc kubenswrapper[4803]: I0127 22:09:39.852678 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-99c48dff5-sj7f4"] Jan 27 22:09:41 crc kubenswrapper[4803]: I0127 22:09:41.129798 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.251169 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.283983 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.284033 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.507990 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.544235 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.828550 4803 generic.go:334] "Generic (PLEG): container finished" podID="6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" containerID="02df4b9253b608f5320e86886a2bb564472768d47678c901d7d8f52eb00aaccd" exitCode=0 Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.828607 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" event={"ID":"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23","Type":"ContainerDied","Data":"02df4b9253b608f5320e86886a2bb564472768d47678c901d7d8f52eb00aaccd"} Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.831268 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8a269bc9-9bdc-4d66-b435-2ec777b4bdcd","Type":"ContainerStarted","Data":"b4dd768010b5f968fb6530422837f7165710759094599278dd0270a3d6928d22"} Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.833767 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cb0e5b16-8baa-435a-bae9-7b09e5602b43","Type":"ContainerStarted","Data":"45676b244abffd70c970d917ace98ccf86f24f554d2571a75643a974107ac936"} Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.835709 4803 generic.go:334] "Generic (PLEG): container finished" podID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerID="64a4d8d38614f6fe156a56ec2cc98eb8d14dedc403fe50c59b65d5eb8ed368ae" exitCode=0 Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.835757 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"006465d9-12d6-4d2e-a02e-8a2669bdcbef","Type":"ContainerDied","Data":"64a4d8d38614f6fe156a56ec2cc98eb8d14dedc403fe50c59b65d5eb8ed368ae"} Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.914322 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.629285556 podStartE2EDuration="32.914295532s" podCreationTimestamp="2026-01-27 22:09:12 +0000 UTC" firstStartedPulling="2026-01-27 22:09:28.960459532 +0000 UTC m=+1321.376481231" lastFinishedPulling="2026-01-27 22:09:44.245469508 +0000 UTC m=+1336.661491207" observedRunningTime="2026-01-27 22:09:44.890990443 +0000 UTC m=+1337.307012142" watchObservedRunningTime="2026-01-27 22:09:44.914295532 +0000 UTC m=+1337.330317231" Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.916982 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.052052467 podStartE2EDuration="31.916966154s" podCreationTimestamp="2026-01-27 22:09:13 +0000 UTC" firstStartedPulling="2026-01-27 22:09:29.300980387 +0000 UTC m=+1321.717002086" lastFinishedPulling="2026-01-27 22:09:44.165894074 +0000 UTC m=+1336.581915773" observedRunningTime="2026-01-27 22:09:44.910800717 +0000 UTC m=+1337.326822426" watchObservedRunningTime="2026-01-27 22:09:44.916966154 +0000 UTC m=+1337.332987853" Jan 27 22:09:44 crc kubenswrapper[4803]: I0127 22:09:44.970990 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.146546 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.146890 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.185945 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.805322 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4spv4"] Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.807774 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4spv4" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.815612 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1ee6-account-create-update-kwcbz"] Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.817164 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1ee6-account-create-update-kwcbz" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.818651 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.845926 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4spv4"] Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.858684 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" event={"ID":"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23","Type":"ContainerStarted","Data":"b6faf513cc9641b766669d6c4c20553ef8ddc1d9345a28498d487f8ebe3919c2"} Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.886982 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1ee6-account-create-update-kwcbz"] Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.903998 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" podStartSLOduration=-9223371991.9508 podStartE2EDuration="44.90397554s" podCreationTimestamp="2026-01-27 22:09:01 +0000 UTC" firstStartedPulling="2026-01-27 22:09:02.568275688 +0000 UTC m=+1294.984297387" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:09:45.879144161 +0000 UTC m=+1338.295165860" watchObservedRunningTime="2026-01-27 22:09:45.90397554 +0000 UTC m=+1338.319997239" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.913464 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.944065 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8912c649-5790-40b5-9fae-415ca9dbdc49-operator-scripts\") pod \"keystone-db-create-4spv4\" (UID: \"8912c649-5790-40b5-9fae-415ca9dbdc49\") " pod="openstack/keystone-db-create-4spv4" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.944117 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37db56ec-494a-417b-9435-a06c024bb779-operator-scripts\") pod \"keystone-1ee6-account-create-update-kwcbz\" (UID: \"37db56ec-494a-417b-9435-a06c024bb779\") " pod="openstack/keystone-1ee6-account-create-update-kwcbz" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.944168 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtkwl\" (UniqueName: \"kubernetes.io/projected/37db56ec-494a-417b-9435-a06c024bb779-kube-api-access-qtkwl\") pod \"keystone-1ee6-account-create-update-kwcbz\" (UID: \"37db56ec-494a-417b-9435-a06c024bb779\") " pod="openstack/keystone-1ee6-account-create-update-kwcbz" Jan 27 22:09:45 crc kubenswrapper[4803]: I0127 22:09:45.944445 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx78v\" (UniqueName: \"kubernetes.io/projected/8912c649-5790-40b5-9fae-415ca9dbdc49-kube-api-access-kx78v\") pod \"keystone-db-create-4spv4\" (UID: \"8912c649-5790-40b5-9fae-415ca9dbdc49\") " pod="openstack/keystone-db-create-4spv4" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.021035 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-jchsg"] Jan 27 22:09:46 crc 
kubenswrapper[4803]: I0127 22:09:46.022867 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-jchsg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.032084 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-jchsg"] Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.076203 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8912c649-5790-40b5-9fae-415ca9dbdc49-operator-scripts\") pod \"keystone-db-create-4spv4\" (UID: \"8912c649-5790-40b5-9fae-415ca9dbdc49\") " pod="openstack/keystone-db-create-4spv4" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.076259 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37db56ec-494a-417b-9435-a06c024bb779-operator-scripts\") pod \"keystone-1ee6-account-create-update-kwcbz\" (UID: \"37db56ec-494a-417b-9435-a06c024bb779\") " pod="openstack/keystone-1ee6-account-create-update-kwcbz" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.076319 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtkwl\" (UniqueName: \"kubernetes.io/projected/37db56ec-494a-417b-9435-a06c024bb779-kube-api-access-qtkwl\") pod \"keystone-1ee6-account-create-update-kwcbz\" (UID: \"37db56ec-494a-417b-9435-a06c024bb779\") " pod="openstack/keystone-1ee6-account-create-update-kwcbz" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.076419 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx78v\" (UniqueName: \"kubernetes.io/projected/8912c649-5790-40b5-9fae-415ca9dbdc49-kube-api-access-kx78v\") pod \"keystone-db-create-4spv4\" (UID: \"8912c649-5790-40b5-9fae-415ca9dbdc49\") " pod="openstack/keystone-db-create-4spv4" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.078353 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37db56ec-494a-417b-9435-a06c024bb779-operator-scripts\") pod \"keystone-1ee6-account-create-update-kwcbz\" (UID: \"37db56ec-494a-417b-9435-a06c024bb779\") " pod="openstack/keystone-1ee6-account-create-update-kwcbz" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.079629 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8912c649-5790-40b5-9fae-415ca9dbdc49-operator-scripts\") pod \"keystone-db-create-4spv4\" (UID: \"8912c649-5790-40b5-9fae-415ca9dbdc49\") " pod="openstack/keystone-db-create-4spv4" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.115652 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtkwl\" (UniqueName: \"kubernetes.io/projected/37db56ec-494a-417b-9435-a06c024bb779-kube-api-access-qtkwl\") pod \"keystone-1ee6-account-create-update-kwcbz\" (UID: \"37db56ec-494a-417b-9435-a06c024bb779\") " pod="openstack/keystone-1ee6-account-create-update-kwcbz" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.131954 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx78v\" (UniqueName: \"kubernetes.io/projected/8912c649-5790-40b5-9fae-415ca9dbdc49-kube-api-access-kx78v\") pod \"keystone-db-create-4spv4\" (UID: \"8912c649-5790-40b5-9fae-415ca9dbdc49\") " pod="openstack/keystone-db-create-4spv4" Jan 27 
22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.143187 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2724-account-create-update-5sfc9"] Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.146106 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2724-account-create-update-5sfc9" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.146336 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4spv4" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.150966 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.163479 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1ee6-account-create-update-kwcbz" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.175607 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2724-account-create-update-5sfc9"] Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.178678 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggv5z\" (UniqueName: \"kubernetes.io/projected/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-kube-api-access-ggv5z\") pod \"placement-db-create-jchsg\" (UID: \"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9\") " pod="openstack/placement-db-create-jchsg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.178803 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-operator-scripts\") pod \"placement-db-create-jchsg\" (UID: \"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9\") " pod="openstack/placement-db-create-jchsg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.215756 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8jgrp"] Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.252742 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-klhpg"] Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.254381 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.257026 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.289241 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brlbr\" (UniqueName: \"kubernetes.io/projected/11d4a04c-eaf3-4e09-912e-ca7b25918f30-kube-api-access-brlbr\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.289292 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-config\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.289457 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgx22\" (UniqueName: \"kubernetes.io/projected/f0f46bec-6bde-45cd-ad44-fb2399387ad7-kube-api-access-kgx22\") pod \"placement-2724-account-create-update-5sfc9\" (UID: \"f0f46bec-6bde-45cd-ad44-fb2399387ad7\") " pod="openstack/placement-2724-account-create-update-5sfc9" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.289487 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.289576 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggv5z\" (UniqueName: \"kubernetes.io/projected/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-kube-api-access-ggv5z\") pod \"placement-db-create-jchsg\" (UID: \"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9\") " pod="openstack/placement-db-create-jchsg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.289631 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.289685 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-operator-scripts\") pod \"placement-db-create-jchsg\" (UID: \"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9\") " pod="openstack/placement-db-create-jchsg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.289745 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0f46bec-6bde-45cd-ad44-fb2399387ad7-operator-scripts\") pod \"placement-2724-account-create-update-5sfc9\" (UID: \"f0f46bec-6bde-45cd-ad44-fb2399387ad7\") " pod="openstack/placement-2724-account-create-update-5sfc9" Jan 27 22:09:46 crc 
kubenswrapper[4803]: I0127 22:09:46.291034 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-operator-scripts\") pod \"placement-db-create-jchsg\" (UID: \"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9\") " pod="openstack/placement-db-create-jchsg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.294294 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-klhpg"] Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.313571 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggv5z\" (UniqueName: \"kubernetes.io/projected/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-kube-api-access-ggv5z\") pod \"placement-db-create-jchsg\" (UID: \"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9\") " pod="openstack/placement-db-create-jchsg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.340304 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-x2crv"] Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.342241 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.349281 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.352571 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-x2crv"] Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.391184 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brlbr\" (UniqueName: \"kubernetes.io/projected/11d4a04c-eaf3-4e09-912e-ca7b25918f30-kube-api-access-brlbr\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.392239 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-config\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.393519 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-config\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.400731 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgx22\" (UniqueName: \"kubernetes.io/projected/f0f46bec-6bde-45cd-ad44-fb2399387ad7-kube-api-access-kgx22\") pod \"placement-2724-account-create-update-5sfc9\" (UID: \"f0f46bec-6bde-45cd-ad44-fb2399387ad7\") " pod="openstack/placement-2724-account-create-update-5sfc9" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.400788 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.401056 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.401193 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0f46bec-6bde-45cd-ad44-fb2399387ad7-operator-scripts\") pod \"placement-2724-account-create-update-5sfc9\" (UID: \"f0f46bec-6bde-45cd-ad44-fb2399387ad7\") " pod="openstack/placement-2724-account-create-update-5sfc9" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.402618 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.402658 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0f46bec-6bde-45cd-ad44-fb2399387ad7-operator-scripts\") pod \"placement-2724-account-create-update-5sfc9\" (UID: \"f0f46bec-6bde-45cd-ad44-fb2399387ad7\") " pod="openstack/placement-2724-account-create-update-5sfc9" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.402711 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.403568 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-jchsg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.424892 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brlbr\" (UniqueName: \"kubernetes.io/projected/11d4a04c-eaf3-4e09-912e-ca7b25918f30-kube-api-access-brlbr\") pod \"dnsmasq-dns-5bf47b49b7-klhpg\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.426676 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgx22\" (UniqueName: \"kubernetes.io/projected/f0f46bec-6bde-45cd-ad44-fb2399387ad7-kube-api-access-kgx22\") pod \"placement-2724-account-create-update-5sfc9\" (UID: \"f0f46bec-6bde-45cd-ad44-fb2399387ad7\") " pod="openstack/placement-2724-account-create-update-5sfc9" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.506203 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea119635-c5fa-46da-b030-9b0cbc93cfa8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.506247 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea119635-c5fa-46da-b030-9b0cbc93cfa8-combined-ca-bundle\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.506365 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbkzh\" (UniqueName: \"kubernetes.io/projected/ea119635-c5fa-46da-b030-9b0cbc93cfa8-kube-api-access-xbkzh\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.506431 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ea119635-c5fa-46da-b030-9b0cbc93cfa8-ovn-rundir\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.506454 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea119635-c5fa-46da-b030-9b0cbc93cfa8-config\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.506523 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ea119635-c5fa-46da-b030-9b0cbc93cfa8-ovs-rundir\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.582764 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2724-account-create-update-5sfc9" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.607957 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ea119635-c5fa-46da-b030-9b0cbc93cfa8-ovs-rundir\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.608022 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea119635-c5fa-46da-b030-9b0cbc93cfa8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.608042 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea119635-c5fa-46da-b030-9b0cbc93cfa8-combined-ca-bundle\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.608119 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbkzh\" (UniqueName: \"kubernetes.io/projected/ea119635-c5fa-46da-b030-9b0cbc93cfa8-kube-api-access-xbkzh\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.608188 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ea119635-c5fa-46da-b030-9b0cbc93cfa8-ovn-rundir\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.608211 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea119635-c5fa-46da-b030-9b0cbc93cfa8-config\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.608366 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ea119635-c5fa-46da-b030-9b0cbc93cfa8-ovs-rundir\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.608809 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ea119635-c5fa-46da-b030-9b0cbc93cfa8-ovn-rundir\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.609037 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea119635-c5fa-46da-b030-9b0cbc93cfa8-config\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc 
kubenswrapper[4803]: I0127 22:09:46.616384 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea119635-c5fa-46da-b030-9b0cbc93cfa8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.617538 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea119635-c5fa-46da-b030-9b0cbc93cfa8-combined-ca-bundle\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.636236 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbkzh\" (UniqueName: \"kubernetes.io/projected/ea119635-c5fa-46da-b030-9b0cbc93cfa8-kube-api-access-xbkzh\") pod \"ovn-controller-metrics-x2crv\" (UID: \"ea119635-c5fa-46da-b030-9b0cbc93cfa8\") " pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.675728 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-klhpg"] Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.677763 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.721351 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-x2crv" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.851074 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-4js9s"] Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.853417 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.856242 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.885976 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4js9s"] Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.912187 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" podUID="6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" containerName="dnsmasq-dns" containerID="cri-o://b6faf513cc9641b766669d6c4c20553ef8ddc1d9345a28498d487f8ebe3919c2" gracePeriod=10 Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.912681 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.929694 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24l6c\" (UniqueName: \"kubernetes.io/projected/7929350b-785c-4baa-b2e6-738687b211a8-kube-api-access-24l6c\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.931185 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.931516 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-config\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.931615 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.931734 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-dns-svc\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:46 crc kubenswrapper[4803]: I0127 22:09:46.951937 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4spv4"] Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.033982 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-config\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.034209 
4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.034351 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-dns-svc\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.034451 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24l6c\" (UniqueName: \"kubernetes.io/projected/7929350b-785c-4baa-b2e6-738687b211a8-kube-api-access-24l6c\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.034601 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.036392 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-config\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.036786 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-dns-svc\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.037382 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.037889 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.069942 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24l6c\" (UniqueName: \"kubernetes.io/projected/7929350b-785c-4baa-b2e6-738687b211a8-kube-api-access-24l6c\") pod \"dnsmasq-dns-8554648995-4js9s\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.096701 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-1ee6-account-create-update-kwcbz"] Jan 27 22:09:47 crc kubenswrapper[4803]: W0127 22:09:47.147303 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37db56ec_494a_417b_9435_a06c024bb779.slice/crio-e213500f7e9b56347641f6aaa64914a83e6b0fa9639a800a18bda25f2df5983d WatchSource:0}: Error finding container e213500f7e9b56347641f6aaa64914a83e6b0fa9639a800a18bda25f2df5983d: Status 404 returned error can't find the container with id e213500f7e9b56347641f6aaa64914a83e6b0fa9639a800a18bda25f2df5983d Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.223201 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.260896 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.262038 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-jchsg"] Jan 27 22:09:47 crc kubenswrapper[4803]: W0127 22:09:47.278915 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda95a82c5_cf45_4dee_9891_d0bd2f0e95b9.slice/crio-1532d1df195bd497a753dda18d6e64bdcc5b20a70da30d664cf0b3ac9d262235 WatchSource:0}: Error finding container 1532d1df195bd497a753dda18d6e64bdcc5b20a70da30d664cf0b3ac9d262235: Status 404 returned error can't find the container with id 1532d1df195bd497a753dda18d6e64bdcc5b20a70da30d664cf0b3ac9d262235 Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.336819 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.501011 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2724-account-create-update-5sfc9"] Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.635581 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-klhpg"] Jan 27 22:09:47 crc kubenswrapper[4803]: W0127 22:09:47.691504 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11d4a04c_eaf3_4e09_912e_ca7b25918f30.slice/crio-716fc18ff73cce71c8689fa938632d5e7ffcc15b43e369957c2e49a8b7bac7e1 WatchSource:0}: Error finding container 716fc18ff73cce71c8689fa938632d5e7ffcc15b43e369957c2e49a8b7bac7e1: Status 404 returned error can't find the container with id 716fc18ff73cce71c8689fa938632d5e7ffcc15b43e369957c2e49a8b7bac7e1 Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.727682 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-x2crv"] Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.881453 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4js9s"] Jan 27 22:09:47 crc kubenswrapper[4803]: W0127 22:09:47.908324 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7929350b_785c_4baa_b2e6_738687b211a8.slice/crio-7baf85555f2c9148de6931e0e003f3b1211f086b691d955b99b43b42795b1796 WatchSource:0}: Error finding container 7baf85555f2c9148de6931e0e003f3b1211f086b691d955b99b43b42795b1796: Status 404 returned error can't find the container with id 
7baf85555f2c9148de6931e0e003f3b1211f086b691d955b99b43b42795b1796 Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.966729 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-x2crv" event={"ID":"ea119635-c5fa-46da-b030-9b0cbc93cfa8","Type":"ContainerStarted","Data":"537f61d66eae0ae30920f49f14a876ca451dbf929d3e82679e7fc527119791a3"} Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.972117 4803 generic.go:334] "Generic (PLEG): container finished" podID="a95a82c5-cf45-4dee-9891-d0bd2f0e95b9" containerID="8c5bdc3a8dfeb255a2227f7412f17ef0966bebf8f441e6a50e045c2c990b2ae0" exitCode=0 Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.972213 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jchsg" event={"ID":"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9","Type":"ContainerDied","Data":"8c5bdc3a8dfeb255a2227f7412f17ef0966bebf8f441e6a50e045c2c990b2ae0"} Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.972248 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jchsg" event={"ID":"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9","Type":"ContainerStarted","Data":"1532d1df195bd497a753dda18d6e64bdcc5b20a70da30d664cf0b3ac9d262235"} Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.974193 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2724-account-create-update-5sfc9" event={"ID":"f0f46bec-6bde-45cd-ad44-fb2399387ad7","Type":"ContainerStarted","Data":"30050414a3f0db1cbaaf56b6c91afba96cd438cf759e21d4e8f0b753dc6453de"} Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.974236 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2724-account-create-update-5sfc9" event={"ID":"f0f46bec-6bde-45cd-ad44-fb2399387ad7","Type":"ContainerStarted","Data":"eb9c3479d26a356c475ac8de44320a45984298e7b50e49d8d2a97be371143378"} Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.983943 4803 generic.go:334] "Generic (PLEG): container finished" podID="37db56ec-494a-417b-9435-a06c024bb779" containerID="965a900e8bc48c23f0caaca6f059a51a611ae609cf84a2d72b10eb034185e1da" exitCode=0 Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.984012 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1ee6-account-create-update-kwcbz" event={"ID":"37db56ec-494a-417b-9435-a06c024bb779","Type":"ContainerDied","Data":"965a900e8bc48c23f0caaca6f059a51a611ae609cf84a2d72b10eb034185e1da"} Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.984036 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1ee6-account-create-update-kwcbz" event={"ID":"37db56ec-494a-417b-9435-a06c024bb779","Type":"ContainerStarted","Data":"e213500f7e9b56347641f6aaa64914a83e6b0fa9639a800a18bda25f2df5983d"} Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.985426 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.985591 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4js9s" event={"ID":"7929350b-785c-4baa-b2e6-738687b211a8","Type":"ContainerStarted","Data":"7baf85555f2c9148de6931e0e003f3b1211f086b691d955b99b43b42795b1796"} Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.987747 4803 generic.go:334] "Generic (PLEG): container finished" podID="8912c649-5790-40b5-9fae-415ca9dbdc49" containerID="5601a77173e43ec54783427c2ed21a7b96518f169e796d61f2ee7be8be7942db" exitCode=0 Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.987810 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4spv4" event={"ID":"8912c649-5790-40b5-9fae-415ca9dbdc49","Type":"ContainerDied","Data":"5601a77173e43ec54783427c2ed21a7b96518f169e796d61f2ee7be8be7942db"} Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.987824 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4spv4" event={"ID":"8912c649-5790-40b5-9fae-415ca9dbdc49","Type":"ContainerStarted","Data":"c313e0f49893e4dc3eb5d6026e49169141bd2efd89c8e0135fdfd8e867f8abad"} Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.999418 4803 generic.go:334] "Generic (PLEG): container finished" podID="6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" containerID="b6faf513cc9641b766669d6c4c20553ef8ddc1d9345a28498d487f8ebe3919c2" exitCode=0 Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.999508 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" event={"ID":"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23","Type":"ContainerDied","Data":"b6faf513cc9641b766669d6c4c20553ef8ddc1d9345a28498d487f8ebe3919c2"} Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.999524 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8jgrp" Jan 27 22:09:47 crc kubenswrapper[4803]: I0127 22:09:47.999542 4803 scope.go:117] "RemoveContainer" containerID="b6faf513cc9641b766669d6c4c20553ef8ddc1d9345a28498d487f8ebe3919c2" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.002893 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" event={"ID":"11d4a04c-eaf3-4e09-912e-ca7b25918f30","Type":"ContainerStarted","Data":"716fc18ff73cce71c8689fa938632d5e7ffcc15b43e369957c2e49a8b7bac7e1"} Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.004331 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.034768 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-2724-account-create-update-5sfc9" podStartSLOduration=2.0345770930000002 podStartE2EDuration="2.034577093s" podCreationTimestamp="2026-01-27 22:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:09:48.026489246 +0000 UTC m=+1340.442510945" watchObservedRunningTime="2026-01-27 22:09:48.034577093 +0000 UTC m=+1340.450598792" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.046331 4803 scope.go:117] "RemoveContainer" containerID="02df4b9253b608f5320e86886a2bb564472768d47678c901d7d8f52eb00aaccd" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.096201 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fd2t7"] Jan 27 22:09:48 crc kubenswrapper[4803]: E0127 22:09:48.096827 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" containerName="init" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.096845 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" containerName="init" Jan 27 22:09:48 crc kubenswrapper[4803]: E0127 22:09:48.096884 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" containerName="dnsmasq-dns" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.096892 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" containerName="dnsmasq-dns" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.097084 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" containerName="dnsmasq-dns" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.098777 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.112626 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.116639 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fd2t7"] Jan 27 22:09:48 crc kubenswrapper[4803]: E0127 22:09:48.122995 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11d4a04c_eaf3_4e09_912e_ca7b25918f30.slice/crio-conmon-23592c751da9867e2c31df41c8f1b430bd7d747d3e73d33a5d0e7858866fe90f.scope\": RecentStats: unable to find data in memory cache]" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.169848 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6wsq\" (UniqueName: \"kubernetes.io/projected/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-kube-api-access-x6wsq\") pod \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.174354 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-dns-svc\") pod \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.174469 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-config\") pod \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\" (UID: \"6a75dbc6-2f5d-47c1-96f4-4af86d4ead23\") " Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.190691 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-kube-api-access-x6wsq" (OuterVolumeSpecName: "kube-api-access-x6wsq") pod "6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" (UID: "6a75dbc6-2f5d-47c1-96f4-4af86d4ead23"). InnerVolumeSpecName "kube-api-access-x6wsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.281052 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc3416a2-788e-417e-9f0e-07f4d5b3c180-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fd2t7\" (UID: \"cc3416a2-788e-417e-9f0e-07f4d5b3c180\") " pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.281127 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx95v\" (UniqueName: \"kubernetes.io/projected/cc3416a2-788e-417e-9f0e-07f4d5b3c180-kube-api-access-bx95v\") pod \"mysqld-exporter-openstack-db-create-fd2t7\" (UID: \"cc3416a2-788e-417e-9f0e-07f4d5b3c180\") " pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.281242 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6wsq\" (UniqueName: \"kubernetes.io/projected/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-kube-api-access-x6wsq\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.373312 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-config" (OuterVolumeSpecName: "config") pod "6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" (UID: "6a75dbc6-2f5d-47c1-96f4-4af86d4ead23"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.392404 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" (UID: "6a75dbc6-2f5d-47c1-96f4-4af86d4ead23"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.442408 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx95v\" (UniqueName: \"kubernetes.io/projected/cc3416a2-788e-417e-9f0e-07f4d5b3c180-kube-api-access-bx95v\") pod \"mysqld-exporter-openstack-db-create-fd2t7\" (UID: \"cc3416a2-788e-417e-9f0e-07f4d5b3c180\") " pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.443690 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc3416a2-788e-417e-9f0e-07f4d5b3c180-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fd2t7\" (UID: \"cc3416a2-788e-417e-9f0e-07f4d5b3c180\") " pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.443954 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc3416a2-788e-417e-9f0e-07f4d5b3c180-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fd2t7\" (UID: \"cc3416a2-788e-417e-9f0e-07f4d5b3c180\") " pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.443968 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.444010 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.451743 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4js9s"] Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.451825 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.494574 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx95v\" (UniqueName: \"kubernetes.io/projected/cc3416a2-788e-417e-9f0e-07f4d5b3c180-kube-api-access-bx95v\") pod \"mysqld-exporter-openstack-db-create-fd2t7\" (UID: \"cc3416a2-788e-417e-9f0e-07f4d5b3c180\") " pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.531071 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pjgqn"] Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.535825 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.537326 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.561016 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pjgqn"] Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.580342 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-c39a-account-create-update-k2wlf"] Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.581917 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.584599 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.603940 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c39a-account-create-update-k2wlf"] Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.648508 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78s8x\" (UniqueName: \"kubernetes.io/projected/4fafdbaa-01ec-42c3-afd2-5416c549677f-kube-api-access-78s8x\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.648595 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.648651 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-config\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.648694 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.648727 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96cdfdbb-1c49-46a6-b901-147ad561f0e6-operator-scripts\") pod \"mysqld-exporter-c39a-account-create-update-k2wlf\" (UID: \"96cdfdbb-1c49-46a6-b901-147ad561f0e6\") " pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.648772 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78mxt\" (UniqueName: \"kubernetes.io/projected/96cdfdbb-1c49-46a6-b901-147ad561f0e6-kube-api-access-78mxt\") pod \"mysqld-exporter-c39a-account-create-update-k2wlf\" (UID: \"96cdfdbb-1c49-46a6-b901-147ad561f0e6\") " pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 
22:09:48.648798 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.655229 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.656941 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.662879 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.663623 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.663970 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-b5pl4" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.665836 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.694325 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.739208 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8jgrp"] Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.747792 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8jgrp"] Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.753560 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85bcae-8159-430e-bf60-b94ca19c4131-config\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.753609 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhrjl\" (UniqueName: \"kubernetes.io/projected/5a85bcae-8159-430e-bf60-b94ca19c4131-kube-api-access-dhrjl\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.753640 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a85bcae-8159-430e-bf60-b94ca19c4131-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.753673 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.753742 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/96cdfdbb-1c49-46a6-b901-147ad561f0e6-operator-scripts\") pod \"mysqld-exporter-c39a-account-create-update-k2wlf\" (UID: \"96cdfdbb-1c49-46a6-b901-147ad561f0e6\") " pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.753775 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a85bcae-8159-430e-bf60-b94ca19c4131-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.753807 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78mxt\" (UniqueName: \"kubernetes.io/projected/96cdfdbb-1c49-46a6-b901-147ad561f0e6-kube-api-access-78mxt\") pod \"mysqld-exporter-c39a-account-create-update-k2wlf\" (UID: \"96cdfdbb-1c49-46a6-b901-147ad561f0e6\") " pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.753824 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85bcae-8159-430e-bf60-b94ca19c4131-scripts\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.753879 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.753942 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78s8x\" (UniqueName: \"kubernetes.io/projected/4fafdbaa-01ec-42c3-afd2-5416c549677f-kube-api-access-78s8x\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.755111 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a85bcae-8159-430e-bf60-b94ca19c4131-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.758292 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.758591 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-config\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.758642 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5a85bcae-8159-430e-bf60-b94ca19c4131-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.760322 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.760448 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-config\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.764095 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.768802 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96cdfdbb-1c49-46a6-b901-147ad561f0e6-operator-scripts\") pod \"mysqld-exporter-c39a-account-create-update-k2wlf\" (UID: \"96cdfdbb-1c49-46a6-b901-147ad561f0e6\") " pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.757840 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.779027 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78mxt\" (UniqueName: \"kubernetes.io/projected/96cdfdbb-1c49-46a6-b901-147ad561f0e6-kube-api-access-78mxt\") pod \"mysqld-exporter-c39a-account-create-update-k2wlf\" (UID: \"96cdfdbb-1c49-46a6-b901-147ad561f0e6\") " pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.779323 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78s8x\" (UniqueName: \"kubernetes.io/projected/4fafdbaa-01ec-42c3-afd2-5416c549677f-kube-api-access-78s8x\") pod \"dnsmasq-dns-b8fbc5445-pjgqn\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") " pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.861579 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a85bcae-8159-430e-bf60-b94ca19c4131-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.861690 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a85bcae-8159-430e-bf60-b94ca19c4131-ovn-northd-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.861718 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85bcae-8159-430e-bf60-b94ca19c4131-config\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.862032 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhrjl\" (UniqueName: \"kubernetes.io/projected/5a85bcae-8159-430e-bf60-b94ca19c4131-kube-api-access-dhrjl\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.862061 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a85bcae-8159-430e-bf60-b94ca19c4131-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.862129 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a85bcae-8159-430e-bf60-b94ca19c4131-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.862155 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85bcae-8159-430e-bf60-b94ca19c4131-scripts\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.862947 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85bcae-8159-430e-bf60-b94ca19c4131-config\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.862972 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a85bcae-8159-430e-bf60-b94ca19c4131-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.862981 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85bcae-8159-430e-bf60-b94ca19c4131-scripts\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.865427 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a85bcae-8159-430e-bf60-b94ca19c4131-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.865581 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a85bcae-8159-430e-bf60-b94ca19c4131-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.867398 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a85bcae-8159-430e-bf60-b94ca19c4131-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.883509 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhrjl\" (UniqueName: \"kubernetes.io/projected/5a85bcae-8159-430e-bf60-b94ca19c4131-kube-api-access-dhrjl\") pod \"ovn-northd-0\" (UID: \"5a85bcae-8159-430e-bf60-b94ca19c4131\") " pod="openstack/ovn-northd-0" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.883817 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.904159 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" Jan 27 22:09:48 crc kubenswrapper[4803]: I0127 22:09:48.978895 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.020598 4803 generic.go:334] "Generic (PLEG): container finished" podID="11d4a04c-eaf3-4e09-912e-ca7b25918f30" containerID="23592c751da9867e2c31df41c8f1b430bd7d747d3e73d33a5d0e7858866fe90f" exitCode=0 Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.020685 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" event={"ID":"11d4a04c-eaf3-4e09-912e-ca7b25918f30","Type":"ContainerDied","Data":"23592c751da9867e2c31df41c8f1b430bd7d747d3e73d33a5d0e7858866fe90f"} Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.029793 4803 generic.go:334] "Generic (PLEG): container finished" podID="f0f46bec-6bde-45cd-ad44-fb2399387ad7" containerID="30050414a3f0db1cbaaf56b6c91afba96cd438cf759e21d4e8f0b753dc6453de" exitCode=0 Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.029895 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2724-account-create-update-5sfc9" event={"ID":"f0f46bec-6bde-45cd-ad44-fb2399387ad7","Type":"ContainerDied","Data":"30050414a3f0db1cbaaf56b6c91afba96cd438cf759e21d4e8f0b753dc6453de"} Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.051507 4803 generic.go:334] "Generic (PLEG): container finished" podID="7929350b-785c-4baa-b2e6-738687b211a8" containerID="6953bc2e4bf7f35699da58cf7ea7752ef6858a32e3a74efea2a8d032e0f78760" exitCode=0 Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.051701 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4js9s" event={"ID":"7929350b-785c-4baa-b2e6-738687b211a8","Type":"ContainerDied","Data":"6953bc2e4bf7f35699da58cf7ea7752ef6858a32e3a74efea2a8d032e0f78760"} Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.113148 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-x2crv" event={"ID":"ea119635-c5fa-46da-b030-9b0cbc93cfa8","Type":"ContainerStarted","Data":"2c2cdb041d7a03224a65a12b397e43b8d9714261875b30b3ed65483ee00d7bec"} Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.141190 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-metrics-x2crv" podStartSLOduration=3.141167363 podStartE2EDuration="3.141167363s" podCreationTimestamp="2026-01-27 22:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:09:49.138370147 +0000 UTC m=+1341.554391846" watchObservedRunningTime="2026-01-27 22:09:49.141167363 +0000 UTC m=+1341.557189062" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.141769 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fd2t7"] Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.395238 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.403614 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.410116 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.415535 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.415746 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.415978 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-whwnp" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.598002 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/72f06f5c-7c0f-4969-89a2-b16210f935c4-cache\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.598606 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.598651 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72f06f5c-7c0f-4969-89a2-b16210f935c4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.598675 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/72f06f5c-7c0f-4969-89a2-b16210f935c4-lock\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.598737 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4cc2a10d-76e7-464b-b43f-2331022cdc26\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4cc2a10d-76e7-464b-b43f-2331022cdc26\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.598779 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xppf\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-kube-api-access-9xppf\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.672028 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.700792 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.700886 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72f06f5c-7c0f-4969-89a2-b16210f935c4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.700921 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/72f06f5c-7c0f-4969-89a2-b16210f935c4-lock\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.701010 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4cc2a10d-76e7-464b-b43f-2331022cdc26\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4cc2a10d-76e7-464b-b43f-2331022cdc26\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.701064 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xppf\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-kube-api-access-9xppf\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.701098 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/72f06f5c-7c0f-4969-89a2-b16210f935c4-cache\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.701557 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/72f06f5c-7c0f-4969-89a2-b16210f935c4-cache\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: E0127 22:09:49.701676 4803 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 27 22:09:49 crc kubenswrapper[4803]: E0127 22:09:49.701690 4803 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 27 22:09:49 crc kubenswrapper[4803]: E0127 22:09:49.701728 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift podName:72f06f5c-7c0f-4969-89a2-b16210f935c4 nodeName:}" failed. No retries permitted until 2026-01-27 22:09:50.201713259 +0000 UTC m=+1342.617734958 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift") pod "swift-storage-0" (UID: "72f06f5c-7c0f-4969-89a2-b16210f935c4") : configmap "swift-ring-files" not found
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.703115 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/72f06f5c-7c0f-4969-89a2-b16210f935c4-lock\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.708872 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.708906 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4cc2a10d-76e7-464b-b43f-2331022cdc26\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4cc2a10d-76e7-464b-b43f-2331022cdc26\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a7a791de92b1279f872e0cdfccd669a14be2b4ab3278cbca809e49f5816e2eda/globalmount\"" pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.718960 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72f06f5c-7c0f-4969-89a2-b16210f935c4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.744030 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xppf\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-kube-api-access-9xppf\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:49 crc kubenswrapper[4803]: I0127 22:09:49.865667 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4cc2a10d-76e7-464b-b43f-2331022cdc26\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4cc2a10d-76e7-464b-b43f-2331022cdc26\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0"
Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.073136 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4js9s"
Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.120006 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4js9s" event={"ID":"7929350b-785c-4baa-b2e6-738687b211a8","Type":"ContainerDied","Data":"7baf85555f2c9148de6931e0e003f3b1211f086b691d955b99b43b42795b1796"}
Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.120051 4803 scope.go:117] "RemoveContainer" containerID="6953bc2e4bf7f35699da58cf7ea7752ef6858a32e3a74efea2a8d032e0f78760"
Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.120172 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4js9s"
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4js9s" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.124368 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" event={"ID":"cc3416a2-788e-417e-9f0e-07f4d5b3c180","Type":"ContainerStarted","Data":"f3a66a1868cce646d06342d1a2fcc9c8ea2806a246dd86c4d11a630830a5a44a"} Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.124396 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" event={"ID":"cc3416a2-788e-417e-9f0e-07f4d5b3c180","Type":"ContainerStarted","Data":"4c4d3d961453dbbb2aeeb3d1230ffb07b63396c7c234ce294366149df1680c9f"} Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.159545 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" podStartSLOduration=2.15932101 podStartE2EDuration="2.15932101s" podCreationTimestamp="2026-01-27 22:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:09:50.151317624 +0000 UTC m=+1342.567339333" watchObservedRunningTime="2026-01-27 22:09:50.15932101 +0000 UTC m=+1342.575342709" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.215305 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-dns-svc\") pod \"7929350b-785c-4baa-b2e6-738687b211a8\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.216094 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-sb\") pod \"7929350b-785c-4baa-b2e6-738687b211a8\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.216200 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24l6c\" (UniqueName: \"kubernetes.io/projected/7929350b-785c-4baa-b2e6-738687b211a8-kube-api-access-24l6c\") pod \"7929350b-785c-4baa-b2e6-738687b211a8\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.216237 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-nb\") pod \"7929350b-785c-4baa-b2e6-738687b211a8\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.216266 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-config\") pod \"7929350b-785c-4baa-b2e6-738687b211a8\" (UID: \"7929350b-785c-4baa-b2e6-738687b211a8\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.216666 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:09:50 crc kubenswrapper[4803]: E0127 22:09:50.218246 4803 projected.go:288] Couldn't get configMap openstack/swift-ring-files: 
configmap "swift-ring-files" not found Jan 27 22:09:50 crc kubenswrapper[4803]: E0127 22:09:50.218469 4803 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 22:09:50 crc kubenswrapper[4803]: E0127 22:09:50.218517 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift podName:72f06f5c-7c0f-4969-89a2-b16210f935c4 nodeName:}" failed. No retries permitted until 2026-01-27 22:09:51.218502514 +0000 UTC m=+1343.634524213 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift") pod "swift-storage-0" (UID: "72f06f5c-7c0f-4969-89a2-b16210f935c4") : configmap "swift-ring-files" not found Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.222063 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7929350b-785c-4baa-b2e6-738687b211a8-kube-api-access-24l6c" (OuterVolumeSpecName: "kube-api-access-24l6c") pod "7929350b-785c-4baa-b2e6-738687b211a8" (UID: "7929350b-785c-4baa-b2e6-738687b211a8"). InnerVolumeSpecName "kube-api-access-24l6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.245300 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7929350b-785c-4baa-b2e6-738687b211a8" (UID: "7929350b-785c-4baa-b2e6-738687b211a8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.249999 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7929350b-785c-4baa-b2e6-738687b211a8" (UID: "7929350b-785c-4baa-b2e6-738687b211a8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.264264 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-config" (OuterVolumeSpecName: "config") pod "7929350b-785c-4baa-b2e6-738687b211a8" (UID: "7929350b-785c-4baa-b2e6-738687b211a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.275518 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7929350b-785c-4baa-b2e6-738687b211a8" (UID: "7929350b-785c-4baa-b2e6-738687b211a8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.319654 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.319691 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.319708 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24l6c\" (UniqueName: \"kubernetes.io/projected/7929350b-785c-4baa-b2e6-738687b211a8-kube-api-access-24l6c\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.319720 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.319733 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7929350b-785c-4baa-b2e6-738687b211a8-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.337982 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a75dbc6-2f5d-47c1-96f4-4af86d4ead23" path="/var/lib/kubelet/pods/6a75dbc6-2f5d-47c1-96f4-4af86d4ead23/volumes" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.477617 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-jchsg" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.488190 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1ee6-account-create-update-kwcbz" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.509320 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4spv4" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.516978 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.527236 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4js9s"] Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.535363 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4js9s"] Jan 27 22:09:50 crc kubenswrapper[4803]: W0127 22:09:50.599344 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fafdbaa_01ec_42c3_afd2_5416c549677f.slice/crio-1d2d20c6c9231e641feb8d1c2148ef44003e1edfc43368a479185b81e205264c WatchSource:0}: Error finding container 1d2d20c6c9231e641feb8d1c2148ef44003e1edfc43368a479185b81e205264c: Status 404 returned error can't find the container with id 1d2d20c6c9231e641feb8d1c2148ef44003e1edfc43368a479185b81e205264c Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.609409 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pjgqn"] Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.628124 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-ovsdbserver-nb\") pod \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.628188 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37db56ec-494a-417b-9435-a06c024bb779-operator-scripts\") pod \"37db56ec-494a-417b-9435-a06c024bb779\" (UID: \"37db56ec-494a-417b-9435-a06c024bb779\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.628237 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggv5z\" (UniqueName: \"kubernetes.io/projected/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-kube-api-access-ggv5z\") pod \"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9\" (UID: \"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.628343 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8912c649-5790-40b5-9fae-415ca9dbdc49-operator-scripts\") pod \"8912c649-5790-40b5-9fae-415ca9dbdc49\" (UID: \"8912c649-5790-40b5-9fae-415ca9dbdc49\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.628375 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-dns-svc\") pod \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.628406 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx78v\" (UniqueName: \"kubernetes.io/projected/8912c649-5790-40b5-9fae-415ca9dbdc49-kube-api-access-kx78v\") pod \"8912c649-5790-40b5-9fae-415ca9dbdc49\" (UID: \"8912c649-5790-40b5-9fae-415ca9dbdc49\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.628451 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-config\") pod \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\" (UID: 
\"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.628485 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-operator-scripts\") pod \"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9\" (UID: \"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.628538 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtkwl\" (UniqueName: \"kubernetes.io/projected/37db56ec-494a-417b-9435-a06c024bb779-kube-api-access-qtkwl\") pod \"37db56ec-494a-417b-9435-a06c024bb779\" (UID: \"37db56ec-494a-417b-9435-a06c024bb779\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.628554 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brlbr\" (UniqueName: \"kubernetes.io/projected/11d4a04c-eaf3-4e09-912e-ca7b25918f30-kube-api-access-brlbr\") pod \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\" (UID: \"11d4a04c-eaf3-4e09-912e-ca7b25918f30\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.632301 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37db56ec-494a-417b-9435-a06c024bb779-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37db56ec-494a-417b-9435-a06c024bb779" (UID: "37db56ec-494a-417b-9435-a06c024bb779"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.634415 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8912c649-5790-40b5-9fae-415ca9dbdc49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8912c649-5790-40b5-9fae-415ca9dbdc49" (UID: "8912c649-5790-40b5-9fae-415ca9dbdc49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.635451 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a95a82c5-cf45-4dee-9891-d0bd2f0e95b9" (UID: "a95a82c5-cf45-4dee-9891-d0bd2f0e95b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.636377 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8912c649-5790-40b5-9fae-415ca9dbdc49-kube-api-access-kx78v" (OuterVolumeSpecName: "kube-api-access-kx78v") pod "8912c649-5790-40b5-9fae-415ca9dbdc49" (UID: "8912c649-5790-40b5-9fae-415ca9dbdc49"). InnerVolumeSpecName "kube-api-access-kx78v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.636409 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d4a04c-eaf3-4e09-912e-ca7b25918f30-kube-api-access-brlbr" (OuterVolumeSpecName: "kube-api-access-brlbr") pod "11d4a04c-eaf3-4e09-912e-ca7b25918f30" (UID: "11d4a04c-eaf3-4e09-912e-ca7b25918f30"). InnerVolumeSpecName "kube-api-access-brlbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.636524 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-kube-api-access-ggv5z" (OuterVolumeSpecName: "kube-api-access-ggv5z") pod "a95a82c5-cf45-4dee-9891-d0bd2f0e95b9" (UID: "a95a82c5-cf45-4dee-9891-d0bd2f0e95b9"). InnerVolumeSpecName "kube-api-access-ggv5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.642942 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37db56ec-494a-417b-9435-a06c024bb779-kube-api-access-qtkwl" (OuterVolumeSpecName: "kube-api-access-qtkwl") pod "37db56ec-494a-417b-9435-a06c024bb779" (UID: "37db56ec-494a-417b-9435-a06c024bb779"). InnerVolumeSpecName "kube-api-access-qtkwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.661233 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "11d4a04c-eaf3-4e09-912e-ca7b25918f30" (UID: "11d4a04c-eaf3-4e09-912e-ca7b25918f30"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.701450 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11d4a04c-eaf3-4e09-912e-ca7b25918f30" (UID: "11d4a04c-eaf3-4e09-912e-ca7b25918f30"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.707492 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-config" (OuterVolumeSpecName: "config") pod "11d4a04c-eaf3-4e09-912e-ca7b25918f30" (UID: "11d4a04c-eaf3-4e09-912e-ca7b25918f30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.737995 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.738378 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2724-account-create-update-5sfc9" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.738876 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37db56ec-494a-417b-9435-a06c024bb779-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.738995 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggv5z\" (UniqueName: \"kubernetes.io/projected/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-kube-api-access-ggv5z\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.739008 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8912c649-5790-40b5-9fae-415ca9dbdc49-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.739018 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.739028 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx78v\" (UniqueName: \"kubernetes.io/projected/8912c649-5790-40b5-9fae-415ca9dbdc49-kube-api-access-kx78v\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.739039 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d4a04c-eaf3-4e09-912e-ca7b25918f30-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.739054 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.739063 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brlbr\" (UniqueName: \"kubernetes.io/projected/11d4a04c-eaf3-4e09-912e-ca7b25918f30-kube-api-access-brlbr\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.739429 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtkwl\" (UniqueName: \"kubernetes.io/projected/37db56ec-494a-417b-9435-a06c024bb779-kube-api-access-qtkwl\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.840800 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgx22\" (UniqueName: \"kubernetes.io/projected/f0f46bec-6bde-45cd-ad44-fb2399387ad7-kube-api-access-kgx22\") pod \"f0f46bec-6bde-45cd-ad44-fb2399387ad7\" (UID: \"f0f46bec-6bde-45cd-ad44-fb2399387ad7\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.840999 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0f46bec-6bde-45cd-ad44-fb2399387ad7-operator-scripts\") pod \"f0f46bec-6bde-45cd-ad44-fb2399387ad7\" (UID: \"f0f46bec-6bde-45cd-ad44-fb2399387ad7\") " Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.841722 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f46bec-6bde-45cd-ad44-fb2399387ad7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"f0f46bec-6bde-45cd-ad44-fb2399387ad7" (UID: "f0f46bec-6bde-45cd-ad44-fb2399387ad7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.845827 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f46bec-6bde-45cd-ad44-fb2399387ad7-kube-api-access-kgx22" (OuterVolumeSpecName: "kube-api-access-kgx22") pod "f0f46bec-6bde-45cd-ad44-fb2399387ad7" (UID: "f0f46bec-6bde-45cd-ad44-fb2399387ad7"). InnerVolumeSpecName "kube-api-access-kgx22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.943562 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0f46bec-6bde-45cd-ad44-fb2399387ad7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:50 crc kubenswrapper[4803]: I0127 22:09:50.943590 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgx22\" (UniqueName: \"kubernetes.io/projected/f0f46bec-6bde-45cd-ad44-fb2399387ad7-kube-api-access-kgx22\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.015404 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c39a-account-create-update-k2wlf"] Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.043137 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.163186 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4spv4" event={"ID":"8912c649-5790-40b5-9fae-415ca9dbdc49","Type":"ContainerDied","Data":"c313e0f49893e4dc3eb5d6026e49169141bd2efd89c8e0135fdfd8e867f8abad"} Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.163450 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c313e0f49893e4dc3eb5d6026e49169141bd2efd89c8e0135fdfd8e867f8abad" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.163389 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-4spv4" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.177926 4803 generic.go:334] "Generic (PLEG): container finished" podID="4fafdbaa-01ec-42c3-afd2-5416c549677f" containerID="93f4de8c6d35f9da04b8ff5eab3d3b670f3808cac5a50bcf1436e452ed54f499" exitCode=0 Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.178000 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" event={"ID":"4fafdbaa-01ec-42c3-afd2-5416c549677f","Type":"ContainerDied","Data":"93f4de8c6d35f9da04b8ff5eab3d3b670f3808cac5a50bcf1436e452ed54f499"} Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.178024 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" event={"ID":"4fafdbaa-01ec-42c3-afd2-5416c549677f","Type":"ContainerStarted","Data":"1d2d20c6c9231e641feb8d1c2148ef44003e1edfc43368a479185b81e205264c"} Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.193823 4803 generic.go:334] "Generic (PLEG): container finished" podID="cc3416a2-788e-417e-9f0e-07f4d5b3c180" containerID="f3a66a1868cce646d06342d1a2fcc9c8ea2806a246dd86c4d11a630830a5a44a" exitCode=0 Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.193906 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" event={"ID":"cc3416a2-788e-417e-9f0e-07f4d5b3c180","Type":"ContainerDied","Data":"f3a66a1868cce646d06342d1a2fcc9c8ea2806a246dd86c4d11a630830a5a44a"} Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.211378 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5a85bcae-8159-430e-bf60-b94ca19c4131","Type":"ContainerStarted","Data":"0f5c14fef0edcdbaa8269b5db683198073873c20f7dfaae07b35324d32cfb08a"} Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.213017 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2724-account-create-update-5sfc9" event={"ID":"f0f46bec-6bde-45cd-ad44-fb2399387ad7","Type":"ContainerDied","Data":"eb9c3479d26a356c475ac8de44320a45984298e7b50e49d8d2a97be371143378"} Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.213034 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb9c3479d26a356c475ac8de44320a45984298e7b50e49d8d2a97be371143378" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.213090 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2724-account-create-update-5sfc9" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.223442 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1ee6-account-create-update-kwcbz" event={"ID":"37db56ec-494a-417b-9435-a06c024bb779","Type":"ContainerDied","Data":"e213500f7e9b56347641f6aaa64914a83e6b0fa9639a800a18bda25f2df5983d"} Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.223484 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e213500f7e9b56347641f6aaa64914a83e6b0fa9639a800a18bda25f2df5983d" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.223543 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1ee6-account-create-update-kwcbz" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.238317 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" event={"ID":"96cdfdbb-1c49-46a6-b901-147ad561f0e6","Type":"ContainerStarted","Data":"c1ae611aeac752d10f0df6ddf701579392f13f08ccb7316a0b66745b7107868c"} Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.255685 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.257091 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" event={"ID":"11d4a04c-eaf3-4e09-912e-ca7b25918f30","Type":"ContainerDied","Data":"716fc18ff73cce71c8689fa938632d5e7ffcc15b43e369957c2e49a8b7bac7e1"} Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.258450 4803 scope.go:117] "RemoveContainer" containerID="23592c751da9867e2c31df41c8f1b430bd7d747d3e73d33a5d0e7858866fe90f" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.263171 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jchsg" event={"ID":"a95a82c5-cf45-4dee-9891-d0bd2f0e95b9","Type":"ContainerDied","Data":"1532d1df195bd497a753dda18d6e64bdcc5b20a70da30d664cf0b3ac9d262235"} Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.263215 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1532d1df195bd497a753dda18d6e64bdcc5b20a70da30d664cf0b3ac9d262235" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.263312 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-jchsg" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.277186 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-klhpg" Jan 27 22:09:51 crc kubenswrapper[4803]: E0127 22:09:51.286785 4803 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 22:09:51 crc kubenswrapper[4803]: E0127 22:09:51.287007 4803 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 22:09:51 crc kubenswrapper[4803]: E0127 22:09:51.287136 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift podName:72f06f5c-7c0f-4969-89a2-b16210f935c4 nodeName:}" failed. No retries permitted until 2026-01-27 22:09:53.28710978 +0000 UTC m=+1345.703131479 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift") pod "swift-storage-0" (UID: "72f06f5c-7c0f-4969-89a2-b16210f935c4") : configmap "swift-ring-files" not found Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.410716 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-qq4dz"] Jan 27 22:09:51 crc kubenswrapper[4803]: E0127 22:09:51.411210 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d4a04c-eaf3-4e09-912e-ca7b25918f30" containerName="init" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411232 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d4a04c-eaf3-4e09-912e-ca7b25918f30" containerName="init" Jan 27 22:09:51 crc kubenswrapper[4803]: E0127 22:09:51.411249 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7929350b-785c-4baa-b2e6-738687b211a8" containerName="init" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411255 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="7929350b-785c-4baa-b2e6-738687b211a8" containerName="init" Jan 27 22:09:51 crc kubenswrapper[4803]: E0127 22:09:51.411272 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f46bec-6bde-45cd-ad44-fb2399387ad7" containerName="mariadb-account-create-update" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411278 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f46bec-6bde-45cd-ad44-fb2399387ad7" containerName="mariadb-account-create-update" Jan 27 22:09:51 crc kubenswrapper[4803]: E0127 22:09:51.411292 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a95a82c5-cf45-4dee-9891-d0bd2f0e95b9" containerName="mariadb-database-create" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411299 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a95a82c5-cf45-4dee-9891-d0bd2f0e95b9" containerName="mariadb-database-create" Jan 27 22:09:51 crc kubenswrapper[4803]: E0127 22:09:51.411322 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8912c649-5790-40b5-9fae-415ca9dbdc49" containerName="mariadb-database-create" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411329 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8912c649-5790-40b5-9fae-415ca9dbdc49" containerName="mariadb-database-create" Jan 27 22:09:51 crc kubenswrapper[4803]: E0127 22:09:51.411338 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37db56ec-494a-417b-9435-a06c024bb779" containerName="mariadb-account-create-update" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411345 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="37db56ec-494a-417b-9435-a06c024bb779" containerName="mariadb-account-create-update" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411813 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a95a82c5-cf45-4dee-9891-d0bd2f0e95b9" containerName="mariadb-database-create" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411844 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d4a04c-eaf3-4e09-912e-ca7b25918f30" containerName="init" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411873 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="37db56ec-494a-417b-9435-a06c024bb779" containerName="mariadb-account-create-update" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411886 4803 memory_manager.go:354] 
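[editor's note] The cpu_manager/memory_manager burst above is housekeeping triggered by admitting a new pod (glance-db-create-qq4dz): RemoveStaleState drops per-container resource assignments whose pods have been deleted. A toy sketch of that pruning pass, with an assumed in-memory state layout rather than the kubelet's real one:

```go
package main

import "fmt"

// removeStaleState mimics, as a toy, the RemoveStaleState pass in the
// log: any per-container assignment whose pod is no longer active is
// logged and deleted from the in-memory state map.
func removeStaleState(assignments map[string]map[string]string, activePods map[string]bool) {
	for podUID, containers := range assignments {
		if activePods[podUID] {
			continue
		}
		for containerName := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				podUID, containerName)
		}
		delete(assignments, podUID)
	}
}

func main() {
	// UIDs copied from the log entries above; the assignment values are
	// placeholders, purely illustrative.
	assignments := map[string]map[string]string{
		"11d4a04c-eaf3-4e09-912e-ca7b25918f30": {"init": "cpuset (illustrative)"},
		"7929350b-785c-4baa-b2e6-738687b211a8": {"init": "cpuset (illustrative)"},
	}
	removeStaleState(assignments, map[string]bool{} /* no active pods */)
}
```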
"RemoveStaleState removing state" podUID="8912c649-5790-40b5-9fae-415ca9dbdc49" containerName="mariadb-database-create" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411901 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f46bec-6bde-45cd-ad44-fb2399387ad7" containerName="mariadb-account-create-update" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.411909 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="7929350b-785c-4baa-b2e6-738687b211a8" containerName="init" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.413209 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qq4dz" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.423775 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qq4dz"] Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.489719 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-klhpg"] Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.501131 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-klhpg"] Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.567831 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-df63-account-create-update-k6xrt"] Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.569738 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-df63-account-create-update-k6xrt" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.572659 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.601410 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwkf\" (UniqueName: \"kubernetes.io/projected/6dba93f2-5c88-4288-938c-42b786852bbf-kube-api-access-sdwkf\") pod \"glance-db-create-qq4dz\" (UID: \"6dba93f2-5c88-4288-938c-42b786852bbf\") " pod="openstack/glance-db-create-qq4dz" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.601919 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dba93f2-5c88-4288-938c-42b786852bbf-operator-scripts\") pod \"glance-db-create-qq4dz\" (UID: \"6dba93f2-5c88-4288-938c-42b786852bbf\") " pod="openstack/glance-db-create-qq4dz" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.605685 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-df63-account-create-update-k6xrt"] Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.709117 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dba93f2-5c88-4288-938c-42b786852bbf-operator-scripts\") pod \"glance-db-create-qq4dz\" (UID: \"6dba93f2-5c88-4288-938c-42b786852bbf\") " pod="openstack/glance-db-create-qq4dz" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.709246 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ccac8b-df21-432b-8026-dbdd520d088c-operator-scripts\") pod \"glance-df63-account-create-update-k6xrt\" (UID: \"04ccac8b-df21-432b-8026-dbdd520d088c\") " pod="openstack/glance-df63-account-create-update-k6xrt" Jan 27 22:09:51 crc 
kubenswrapper[4803]: I0127 22:09:51.709315 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdwkf\" (UniqueName: \"kubernetes.io/projected/6dba93f2-5c88-4288-938c-42b786852bbf-kube-api-access-sdwkf\") pod \"glance-db-create-qq4dz\" (UID: \"6dba93f2-5c88-4288-938c-42b786852bbf\") " pod="openstack/glance-db-create-qq4dz" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.709774 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dba93f2-5c88-4288-938c-42b786852bbf-operator-scripts\") pod \"glance-db-create-qq4dz\" (UID: \"6dba93f2-5c88-4288-938c-42b786852bbf\") " pod="openstack/glance-db-create-qq4dz" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.711119 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b4nj\" (UniqueName: \"kubernetes.io/projected/04ccac8b-df21-432b-8026-dbdd520d088c-kube-api-access-6b4nj\") pod \"glance-df63-account-create-update-k6xrt\" (UID: \"04ccac8b-df21-432b-8026-dbdd520d088c\") " pod="openstack/glance-df63-account-create-update-k6xrt" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.732513 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdwkf\" (UniqueName: \"kubernetes.io/projected/6dba93f2-5c88-4288-938c-42b786852bbf-kube-api-access-sdwkf\") pod \"glance-db-create-qq4dz\" (UID: \"6dba93f2-5c88-4288-938c-42b786852bbf\") " pod="openstack/glance-db-create-qq4dz" Jan 27 22:09:51 crc kubenswrapper[4803]: E0127 22:09:51.762817 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96cdfdbb_1c49_46a6_b901_147ad561f0e6.slice/crio-332032bd16283da917921625227e9d0ed3485ca7145ba1a5dd0cf2995db3ced4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96cdfdbb_1c49_46a6_b901_147ad561f0e6.slice/crio-conmon-332032bd16283da917921625227e9d0ed3485ca7145ba1a5dd0cf2995db3ced4.scope\": RecentStats: unable to find data in memory cache]" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.789733 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-qq4dz" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.813403 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ccac8b-df21-432b-8026-dbdd520d088c-operator-scripts\") pod \"glance-df63-account-create-update-k6xrt\" (UID: \"04ccac8b-df21-432b-8026-dbdd520d088c\") " pod="openstack/glance-df63-account-create-update-k6xrt" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.813528 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b4nj\" (UniqueName: \"kubernetes.io/projected/04ccac8b-df21-432b-8026-dbdd520d088c-kube-api-access-6b4nj\") pod \"glance-df63-account-create-update-k6xrt\" (UID: \"04ccac8b-df21-432b-8026-dbdd520d088c\") " pod="openstack/glance-df63-account-create-update-k6xrt" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.814142 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ccac8b-df21-432b-8026-dbdd520d088c-operator-scripts\") pod \"glance-df63-account-create-update-k6xrt\" (UID: \"04ccac8b-df21-432b-8026-dbdd520d088c\") " pod="openstack/glance-df63-account-create-update-k6xrt" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.828510 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b4nj\" (UniqueName: \"kubernetes.io/projected/04ccac8b-df21-432b-8026-dbdd520d088c-kube-api-access-6b4nj\") pod \"glance-df63-account-create-update-k6xrt\" (UID: \"04ccac8b-df21-432b-8026-dbdd520d088c\") " pod="openstack/glance-df63-account-create-update-k6xrt" Jan 27 22:09:51 crc kubenswrapper[4803]: I0127 22:09:51.923317 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-df63-account-create-update-k6xrt" Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.265691 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qq4dz"] Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.274577 4803 generic.go:334] "Generic (PLEG): container finished" podID="96cdfdbb-1c49-46a6-b901-147ad561f0e6" containerID="332032bd16283da917921625227e9d0ed3485ca7145ba1a5dd0cf2995db3ced4" exitCode=0 Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.274637 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" event={"ID":"96cdfdbb-1c49-46a6-b901-147ad561f0e6","Type":"ContainerDied","Data":"332032bd16283da917921625227e9d0ed3485ca7145ba1a5dd0cf2995db3ced4"} Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.277801 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" event={"ID":"4fafdbaa-01ec-42c3-afd2-5416c549677f","Type":"ContainerStarted","Data":"609f34d79c9a61cf6ef2b7e79f12b036b9a6e2413a7c5943daf48b422ddb8ce8"} Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.331514 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11d4a04c-eaf3-4e09-912e-ca7b25918f30" path="/var/lib/kubelet/pods/11d4a04c-eaf3-4e09-912e-ca7b25918f30/volumes" Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.332124 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7929350b-785c-4baa-b2e6-738687b211a8" path="/var/lib/kubelet/pods/7929350b-785c-4baa-b2e6-738687b211a8/volumes" Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.907989 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" podStartSLOduration=4.907971108 podStartE2EDuration="4.907971108s" podCreationTimestamp="2026-01-27 22:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:09:52.316789467 +0000 UTC m=+1344.732811166" watchObservedRunningTime="2026-01-27 22:09:52.907971108 +0000 UTC m=+1345.323992807" Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.916707 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sk5jc"] Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.921651 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.924609 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sk5jc" Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.926692 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sk5jc"] Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.927164 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.950715 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx95v\" (UniqueName: \"kubernetes.io/projected/cc3416a2-788e-417e-9f0e-07f4d5b3c180-kube-api-access-bx95v\") pod \"cc3416a2-788e-417e-9f0e-07f4d5b3c180\" (UID: \"cc3416a2-788e-417e-9f0e-07f4d5b3c180\") " Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.951097 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc3416a2-788e-417e-9f0e-07f4d5b3c180-operator-scripts\") pod \"cc3416a2-788e-417e-9f0e-07f4d5b3c180\" (UID: \"cc3416a2-788e-417e-9f0e-07f4d5b3c180\") " Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.951911 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc3416a2-788e-417e-9f0e-07f4d5b3c180-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc3416a2-788e-417e-9f0e-07f4d5b3c180" (UID: "cc3416a2-788e-417e-9f0e-07f4d5b3c180"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.952275 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85dd1a77-7c09-44a3-bc49-0f19dff3c948-operator-scripts\") pod \"root-account-create-update-sk5jc\" (UID: \"85dd1a77-7c09-44a3-bc49-0f19dff3c948\") " pod="openstack/root-account-create-update-sk5jc" Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.952310 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg257\" (UniqueName: \"kubernetes.io/projected/85dd1a77-7c09-44a3-bc49-0f19dff3c948-kube-api-access-wg257\") pod \"root-account-create-update-sk5jc\" (UID: \"85dd1a77-7c09-44a3-bc49-0f19dff3c948\") " pod="openstack/root-account-create-update-sk5jc" Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.952455 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc3416a2-788e-417e-9f0e-07f4d5b3c180-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:52 crc kubenswrapper[4803]: I0127 22:09:52.957410 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc3416a2-788e-417e-9f0e-07f4d5b3c180-kube-api-access-bx95v" (OuterVolumeSpecName: "kube-api-access-bx95v") pod "cc3416a2-788e-417e-9f0e-07f4d5b3c180" (UID: "cc3416a2-788e-417e-9f0e-07f4d5b3c180"). InnerVolumeSpecName "kube-api-access-bx95v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.053650 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-df63-account-create-update-k6xrt"] Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.055963 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85dd1a77-7c09-44a3-bc49-0f19dff3c948-operator-scripts\") pod \"root-account-create-update-sk5jc\" (UID: \"85dd1a77-7c09-44a3-bc49-0f19dff3c948\") " pod="openstack/root-account-create-update-sk5jc" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.056101 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg257\" (UniqueName: \"kubernetes.io/projected/85dd1a77-7c09-44a3-bc49-0f19dff3c948-kube-api-access-wg257\") pod \"root-account-create-update-sk5jc\" (UID: \"85dd1a77-7c09-44a3-bc49-0f19dff3c948\") " pod="openstack/root-account-create-update-sk5jc" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.055979 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85dd1a77-7c09-44a3-bc49-0f19dff3c948-operator-scripts\") pod \"root-account-create-update-sk5jc\" (UID: \"85dd1a77-7c09-44a3-bc49-0f19dff3c948\") " pod="openstack/root-account-create-update-sk5jc" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.056729 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx95v\" (UniqueName: \"kubernetes.io/projected/cc3416a2-788e-417e-9f0e-07f4d5b3c180-kube-api-access-bx95v\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.106481 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg257\" (UniqueName: \"kubernetes.io/projected/85dd1a77-7c09-44a3-bc49-0f19dff3c948-kube-api-access-wg257\") pod \"root-account-create-update-sk5jc\" (UID: \"85dd1a77-7c09-44a3-bc49-0f19dff3c948\") " pod="openstack/root-account-create-update-sk5jc" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.257016 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sk5jc" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.272050 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-96md4"] Jan 27 22:09:53 crc kubenswrapper[4803]: E0127 22:09:53.275337 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3416a2-788e-417e-9f0e-07f4d5b3c180" containerName="mariadb-database-create" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.275363 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3416a2-788e-417e-9f0e-07f4d5b3c180" containerName="mariadb-database-create" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.275566 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc3416a2-788e-417e-9f0e-07f4d5b3c180" containerName="mariadb-database-create" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.276518 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.280381 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.280621 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.281518 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.295303 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-96md4"] Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.301356 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.305285 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fd2t7" event={"ID":"cc3416a2-788e-417e-9f0e-07f4d5b3c180","Type":"ContainerDied","Data":"4c4d3d961453dbbb2aeeb3d1230ffb07b63396c7c234ce294366149df1680c9f"} Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.305338 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c4d3d961453dbbb2aeeb3d1230ffb07b63396c7c234ce294366149df1680c9f" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.321248 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-df63-account-create-update-k6xrt" event={"ID":"04ccac8b-df21-432b-8026-dbdd520d088c","Type":"ContainerStarted","Data":"19eb8b13e1a481392fefa1de99b579fa7734925172683d99aca26a06d27a1168"} Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.324664 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qq4dz" event={"ID":"6dba93f2-5c88-4288-938c-42b786852bbf","Type":"ContainerStarted","Data":"212a6f9d209c2eaf09e83d45bcd7651f3d8e1a78896e0b622f222337d94e0f8c"} Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.324688 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qq4dz" event={"ID":"6dba93f2-5c88-4288-938c-42b786852bbf","Type":"ContainerStarted","Data":"c8b0f96f2e0b783cec22c6bb8574f4cdff4a2ce43ca6face41c432f13ceee0d8"} Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.328504 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5a85bcae-8159-430e-bf60-b94ca19c4131","Type":"ContainerStarted","Data":"8166387863da6abd526bf694d039fd9b7c5d90e690645ed4e35aeab1911da398"} Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.328927 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.362220 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-etc-swift\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.362299 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-combined-ca-bundle\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.362389 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-scripts\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.362415 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-dispersionconf\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.362445 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxzr\" (UniqueName: \"kubernetes.io/projected/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-kube-api-access-mwxzr\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.362524 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.362586 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-ring-data-devices\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.362620 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-swiftconf\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: E0127 22:09:53.364253 4803 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 22:09:53 crc kubenswrapper[4803]: E0127 22:09:53.364280 4803 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 22:09:53 crc kubenswrapper[4803]: E0127 22:09:53.364324 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift podName:72f06f5c-7c0f-4969-89a2-b16210f935c4 nodeName:}" failed. No retries permitted until 2026-01-27 22:09:57.364308725 +0000 UTC m=+1349.780330514 (durationBeforeRetry 4s). 
Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.469301 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-combined-ca-bundle\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4"
Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.469704 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-scripts\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4"
Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.469730 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-dispersionconf\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4"
Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.469748 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxzr\" (UniqueName: \"kubernetes.io/projected/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-kube-api-access-mwxzr\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4"
Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.469825 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-ring-data-devices\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4"
Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.469864 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-swiftconf\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4"
Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.469955 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-etc-swift\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4"
Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.470390 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-etc-swift\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4"
Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.471781 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-ring-data-devices\") 
pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.472262 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-scripts\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.476750 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-combined-ca-bundle\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.478808 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-swiftconf\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.487199 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-dispersionconf\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.492835 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxzr\" (UniqueName: \"kubernetes.io/projected/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-kube-api-access-mwxzr\") pod \"swift-ring-rebalance-96md4\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.732359 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.786128 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.877776 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96cdfdbb-1c49-46a6-b901-147ad561f0e6-operator-scripts\") pod \"96cdfdbb-1c49-46a6-b901-147ad561f0e6\" (UID: \"96cdfdbb-1c49-46a6-b901-147ad561f0e6\") " Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.878070 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78mxt\" (UniqueName: \"kubernetes.io/projected/96cdfdbb-1c49-46a6-b901-147ad561f0e6-kube-api-access-78mxt\") pod \"96cdfdbb-1c49-46a6-b901-147ad561f0e6\" (UID: \"96cdfdbb-1c49-46a6-b901-147ad561f0e6\") " Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.878618 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96cdfdbb-1c49-46a6-b901-147ad561f0e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96cdfdbb-1c49-46a6-b901-147ad561f0e6" (UID: "96cdfdbb-1c49-46a6-b901-147ad561f0e6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:53 crc kubenswrapper[4803]: I0127 22:09:53.884383 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96cdfdbb-1c49-46a6-b901-147ad561f0e6-kube-api-access-78mxt" (OuterVolumeSpecName: "kube-api-access-78mxt") pod "96cdfdbb-1c49-46a6-b901-147ad561f0e6" (UID: "96cdfdbb-1c49-46a6-b901-147ad561f0e6"). InnerVolumeSpecName "kube-api-access-78mxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:53.929127 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sk5jc"] Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:53.979798 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78mxt\" (UniqueName: \"kubernetes.io/projected/96cdfdbb-1c49-46a6-b901-147ad561f0e6-kube-api-access-78mxt\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:53.979821 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96cdfdbb-1c49-46a6-b901-147ad561f0e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.390087 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sk5jc" event={"ID":"85dd1a77-7c09-44a3-bc49-0f19dff3c948","Type":"ContainerStarted","Data":"114ca0918f39cb176817e5b6eef554e2a7b7fa226bf66ee4163d2dfa5514fd40"} Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.390122 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sk5jc" event={"ID":"85dd1a77-7c09-44a3-bc49-0f19dff3c948","Type":"ContainerStarted","Data":"4e32e27e678eddb731e9448204e6d8f343e99091c09f78449c9a0a2b37f9c940"} Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.401385 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" event={"ID":"96cdfdbb-1c49-46a6-b901-147ad561f0e6","Type":"ContainerDied","Data":"c1ae611aeac752d10f0df6ddf701579392f13f08ccb7316a0b66745b7107868c"} Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.401644 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1ae611aeac752d10f0df6ddf701579392f13f08ccb7316a0b66745b7107868c" Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.401486 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c39a-account-create-update-k2wlf" Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.431153 4803 generic.go:334] "Generic (PLEG): container finished" podID="04ccac8b-df21-432b-8026-dbdd520d088c" containerID="3fad64164673658069f6e82800df33ee3f7cc8e466159aa90debab92e4f39637" exitCode=0 Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.431212 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-df63-account-create-update-k6xrt" event={"ID":"04ccac8b-df21-432b-8026-dbdd520d088c","Type":"ContainerDied","Data":"3fad64164673658069f6e82800df33ee3f7cc8e466159aa90debab92e4f39637"} Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.464017 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-sk5jc" podStartSLOduration=2.463994578 podStartE2EDuration="2.463994578s" podCreationTimestamp="2026-01-27 22:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:09:54.417645509 +0000 UTC m=+1346.833667218" watchObservedRunningTime="2026-01-27 22:09:54.463994578 +0000 UTC m=+1346.880016277" Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.464097 4803 generic.go:334] "Generic (PLEG): container finished" podID="6dba93f2-5c88-4288-938c-42b786852bbf" containerID="212a6f9d209c2eaf09e83d45bcd7651f3d8e1a78896e0b622f222337d94e0f8c" exitCode=0 Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.464168 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qq4dz" event={"ID":"6dba93f2-5c88-4288-938c-42b786852bbf","Type":"ContainerDied","Data":"212a6f9d209c2eaf09e83d45bcd7651f3d8e1a78896e0b622f222337d94e0f8c"} Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.524759 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5a85bcae-8159-430e-bf60-b94ca19c4131","Type":"ContainerStarted","Data":"105e55ea5819fce4dc4525c7899a065b2988b7ad7826bc4a6f9b4d8575909309"} Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.524813 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:54.817334 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.239303847 podStartE2EDuration="6.817315789s" podCreationTimestamp="2026-01-27 22:09:48 +0000 UTC" firstStartedPulling="2026-01-27 22:09:51.05292228 +0000 UTC m=+1343.468943979" lastFinishedPulling="2026-01-27 22:09:52.630934222 +0000 UTC m=+1345.046955921" observedRunningTime="2026-01-27 22:09:54.562643757 +0000 UTC m=+1346.978665456" watchObservedRunningTime="2026-01-27 22:09:54.817315789 +0000 UTC m=+1347.233337488" Jan 27 22:09:55 crc kubenswrapper[4803]: I0127 22:09:55.994881 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-96md4"] Jan 27 22:09:56 crc kubenswrapper[4803]: I0127 22:09:56.543386 4803 generic.go:334] "Generic (PLEG): container finished" podID="85dd1a77-7c09-44a3-bc49-0f19dff3c948" containerID="114ca0918f39cb176817e5b6eef554e2a7b7fa226bf66ee4163d2dfa5514fd40" exitCode=0 Jan 27 22:09:56 crc kubenswrapper[4803]: I0127 22:09:56.543505 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sk5jc" 
event={"ID":"85dd1a77-7c09-44a3-bc49-0f19dff3c948","Type":"ContainerDied","Data":"114ca0918f39cb176817e5b6eef554e2a7b7fa226bf66ee4163d2dfa5514fd40"} Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.385901 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:09:57 crc kubenswrapper[4803]: E0127 22:09:57.386229 4803 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 22:09:57 crc kubenswrapper[4803]: E0127 22:09:57.386250 4803 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 22:09:57 crc kubenswrapper[4803]: E0127 22:09:57.386306 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift podName:72f06f5c-7c0f-4969-89a2-b16210f935c4 nodeName:}" failed. No retries permitted until 2026-01-27 22:10:05.386280535 +0000 UTC m=+1357.802302234 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift") pod "swift-storage-0" (UID: "72f06f5c-7c0f-4969-89a2-b16210f935c4") : configmap "swift-ring-files" not found Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.531804 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qq4dz" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.556473 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-96md4" event={"ID":"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00","Type":"ContainerStarted","Data":"aefcf8ef30a5775942d112ce9ca06dbf9a74d517d6ad021289f7bb77a14bfc2b"} Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.558190 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-df63-account-create-update-k6xrt" event={"ID":"04ccac8b-df21-432b-8026-dbdd520d088c","Type":"ContainerDied","Data":"19eb8b13e1a481392fefa1de99b579fa7734925172683d99aca26a06d27a1168"} Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.558210 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19eb8b13e1a481392fefa1de99b579fa7734925172683d99aca26a06d27a1168" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.568581 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qq4dz" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.568597 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qq4dz" event={"ID":"6dba93f2-5c88-4288-938c-42b786852bbf","Type":"ContainerDied","Data":"c8b0f96f2e0b783cec22c6bb8574f4cdff4a2ce43ca6face41c432f13ceee0d8"} Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.568632 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b0f96f2e0b783cec22c6bb8574f4cdff4a2ce43ca6face41c432f13ceee0d8" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.575698 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-df63-account-create-update-k6xrt" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.590758 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dba93f2-5c88-4288-938c-42b786852bbf-operator-scripts\") pod \"6dba93f2-5c88-4288-938c-42b786852bbf\" (UID: \"6dba93f2-5c88-4288-938c-42b786852bbf\") " Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.590816 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdwkf\" (UniqueName: \"kubernetes.io/projected/6dba93f2-5c88-4288-938c-42b786852bbf-kube-api-access-sdwkf\") pod \"6dba93f2-5c88-4288-938c-42b786852bbf\" (UID: \"6dba93f2-5c88-4288-938c-42b786852bbf\") " Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.591557 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dba93f2-5c88-4288-938c-42b786852bbf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6dba93f2-5c88-4288-938c-42b786852bbf" (UID: "6dba93f2-5c88-4288-938c-42b786852bbf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.599136 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dba93f2-5c88-4288-938c-42b786852bbf-kube-api-access-sdwkf" (OuterVolumeSpecName: "kube-api-access-sdwkf") pod "6dba93f2-5c88-4288-938c-42b786852bbf" (UID: "6dba93f2-5c88-4288-938c-42b786852bbf"). InnerVolumeSpecName "kube-api-access-sdwkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.694696 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b4nj\" (UniqueName: \"kubernetes.io/projected/04ccac8b-df21-432b-8026-dbdd520d088c-kube-api-access-6b4nj\") pod \"04ccac8b-df21-432b-8026-dbdd520d088c\" (UID: \"04ccac8b-df21-432b-8026-dbdd520d088c\") " Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.694979 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ccac8b-df21-432b-8026-dbdd520d088c-operator-scripts\") pod \"04ccac8b-df21-432b-8026-dbdd520d088c\" (UID: \"04ccac8b-df21-432b-8026-dbdd520d088c\") " Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.695367 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ccac8b-df21-432b-8026-dbdd520d088c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04ccac8b-df21-432b-8026-dbdd520d088c" (UID: "04ccac8b-df21-432b-8026-dbdd520d088c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.695619 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ccac8b-df21-432b-8026-dbdd520d088c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.695633 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dba93f2-5c88-4288-938c-42b786852bbf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.695643 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdwkf\" (UniqueName: \"kubernetes.io/projected/6dba93f2-5c88-4288-938c-42b786852bbf-kube-api-access-sdwkf\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.699155 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ccac8b-df21-432b-8026-dbdd520d088c-kube-api-access-6b4nj" (OuterVolumeSpecName: "kube-api-access-6b4nj") pod "04ccac8b-df21-432b-8026-dbdd520d088c" (UID: "04ccac8b-df21-432b-8026-dbdd520d088c"). InnerVolumeSpecName "kube-api-access-6b4nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.797862 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b4nj\" (UniqueName: \"kubernetes.io/projected/04ccac8b-df21-432b-8026-dbdd520d088c-kube-api-access-6b4nj\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:57 crc kubenswrapper[4803]: I0127 22:09:57.933072 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sk5jc" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.002638 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg257\" (UniqueName: \"kubernetes.io/projected/85dd1a77-7c09-44a3-bc49-0f19dff3c948-kube-api-access-wg257\") pod \"85dd1a77-7c09-44a3-bc49-0f19dff3c948\" (UID: \"85dd1a77-7c09-44a3-bc49-0f19dff3c948\") " Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.002742 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85dd1a77-7c09-44a3-bc49-0f19dff3c948-operator-scripts\") pod \"85dd1a77-7c09-44a3-bc49-0f19dff3c948\" (UID: \"85dd1a77-7c09-44a3-bc49-0f19dff3c948\") " Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.003671 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85dd1a77-7c09-44a3-bc49-0f19dff3c948-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85dd1a77-7c09-44a3-bc49-0f19dff3c948" (UID: "85dd1a77-7c09-44a3-bc49-0f19dff3c948"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.006664 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85dd1a77-7c09-44a3-bc49-0f19dff3c948-kube-api-access-wg257" (OuterVolumeSpecName: "kube-api-access-wg257") pod "85dd1a77-7c09-44a3-bc49-0f19dff3c948" (UID: "85dd1a77-7c09-44a3-bc49-0f19dff3c948"). InnerVolumeSpecName "kube-api-access-wg257". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.105553 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85dd1a77-7c09-44a3-bc49-0f19dff3c948-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.105591 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg257\" (UniqueName: \"kubernetes.io/projected/85dd1a77-7c09-44a3-bc49-0f19dff3c948-kube-api-access-wg257\") on node \"crc\" DevicePath \"\"" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.581524 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sk5jc" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.581526 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sk5jc" event={"ID":"85dd1a77-7c09-44a3-bc49-0f19dff3c948","Type":"ContainerDied","Data":"4e32e27e678eddb731e9448204e6d8f343e99091c09f78449c9a0a2b37f9c940"} Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.582107 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e32e27e678eddb731e9448204e6d8f343e99091c09f78449c9a0a2b37f9c940" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.593556 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-df63-account-create-update-k6xrt" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.593900 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"006465d9-12d6-4d2e-a02e-8a2669bdcbef","Type":"ContainerStarted","Data":"1d16d2ec6950ee680547e996772b5f061effba959c9b6af212e7a394bfc5dc9f"} Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.671770 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p"] Jan 27 22:09:58 crc kubenswrapper[4803]: E0127 22:09:58.672166 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ccac8b-df21-432b-8026-dbdd520d088c" containerName="mariadb-account-create-update" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.672184 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ccac8b-df21-432b-8026-dbdd520d088c" containerName="mariadb-account-create-update" Jan 27 22:09:58 crc kubenswrapper[4803]: E0127 22:09:58.672200 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cdfdbb-1c49-46a6-b901-147ad561f0e6" containerName="mariadb-account-create-update" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.672206 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cdfdbb-1c49-46a6-b901-147ad561f0e6" containerName="mariadb-account-create-update" Jan 27 22:09:58 crc kubenswrapper[4803]: E0127 22:09:58.672228 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dba93f2-5c88-4288-938c-42b786852bbf" containerName="mariadb-database-create" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.672234 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dba93f2-5c88-4288-938c-42b786852bbf" containerName="mariadb-database-create" Jan 27 22:09:58 crc kubenswrapper[4803]: E0127 22:09:58.672244 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85dd1a77-7c09-44a3-bc49-0f19dff3c948" containerName="mariadb-account-create-update" Jan 27 22:09:58 crc 
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.672250 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="85dd1a77-7c09-44a3-bc49-0f19dff3c948" containerName="mariadb-account-create-update"
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.672415 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="85dd1a77-7c09-44a3-bc49-0f19dff3c948" containerName="mariadb-account-create-update"
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.672439 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dba93f2-5c88-4288-938c-42b786852bbf" containerName="mariadb-database-create"
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.672451 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ccac8b-df21-432b-8026-dbdd520d088c" containerName="mariadb-account-create-update"
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.672461 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="96cdfdbb-1c49-46a6-b901-147ad561f0e6" containerName="mariadb-account-create-update"
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.673111 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p"
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.680469 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p"]
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.717918 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdnn\" (UniqueName: \"kubernetes.io/projected/cf306541-6ada-4bf3-8a32-a1de57044cf8-kube-api-access-7zdnn\") pod \"mysqld-exporter-openstack-cell1-db-create-xnn9p\" (UID: \"cf306541-6ada-4bf3-8a32-a1de57044cf8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p"
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.718148 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf306541-6ada-4bf3-8a32-a1de57044cf8-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-xnn9p\" (UID: \"cf306541-6ada-4bf3-8a32-a1de57044cf8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p"
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.820115 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdnn\" (UniqueName: \"kubernetes.io/projected/cf306541-6ada-4bf3-8a32-a1de57044cf8-kube-api-access-7zdnn\") pod \"mysqld-exporter-openstack-cell1-db-create-xnn9p\" (UID: \"cf306541-6ada-4bf3-8a32-a1de57044cf8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p"
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.820214 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf306541-6ada-4bf3-8a32-a1de57044cf8-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-xnn9p\" (UID: \"cf306541-6ada-4bf3-8a32-a1de57044cf8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p"
Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.821155 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf306541-6ada-4bf3-8a32-a1de57044cf8-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-xnn9p\" (UID: \"cf306541-6ada-4bf3-8a32-a1de57044cf8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p"
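The burst of cpu_manager / state_mem / memory_manager entries above is routine bookkeeping: once the short-lived db-create and account-create pods exit, kubelet drops their per-container CPU and memory-manager state before admitting new pods; the "RemoveStaleState: removing container" lines are logged at E severity even though nothing is actually failing. A sketch that groups these cleanup entries by podUID so each finished pod can be checked against all three state stores (regex built only from the line shapes above):

```python
import re
from collections import defaultdict

STALE = re.compile(
    r'(?P<mgr>cpu_manager|memory_manager|state_mem)\.go:\d+\]'
    r'.*?podUID="(?P<uid>[0-9a-f-]+)" containerName="(?P<name>[^"]+)"'
)

def stale_state_by_pod(journal_lines):
    """Map podUID -> set of managers that logged a cleanup entry for it."""
    cleaned = defaultdict(set)
    for line in journal_lines:
        m = STALE.search(line)
        if m:
            cleaned[m.group("uid")].add(m.group("mgr"))
    return cleaned  # full cleanup = {'cpu_manager', 'state_mem', 'memory_manager'}
```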
\"cf306541-6ada-4bf3-8a32-a1de57044cf8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.853983 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdnn\" (UniqueName: \"kubernetes.io/projected/cf306541-6ada-4bf3-8a32-a1de57044cf8-kube-api-access-7zdnn\") pod \"mysqld-exporter-openstack-cell1-db-create-xnn9p\" (UID: \"cf306541-6ada-4bf3-8a32-a1de57044cf8\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.887056 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.908686 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-e69e-account-create-update-qnj69"] Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.910476 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.915838 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.939048 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-e69e-account-create-update-qnj69"] Jan 27 22:09:58 crc kubenswrapper[4803]: I0127 22:09:58.989566 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p" Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.024300 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q2v4v"] Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.024513 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" podUID="d2331ee6-b42a-43ef-b314-ab0084130872" containerName="dnsmasq-dns" containerID="cri-o://44eebac9d3582f51e08e051e75ff94b98d8bc1c5d73dc3b58bdef79de72df67e" gracePeriod=10 Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.025485 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrsn\" (UniqueName: \"kubernetes.io/projected/bbc3413d-60d0-477c-a252-98ac28898260-kube-api-access-dzrsn\") pod \"mysqld-exporter-e69e-account-create-update-qnj69\" (UID: \"bbc3413d-60d0-477c-a252-98ac28898260\") " pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.025538 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc3413d-60d0-477c-a252-98ac28898260-operator-scripts\") pod \"mysqld-exporter-e69e-account-create-update-qnj69\" (UID: \"bbc3413d-60d0-477c-a252-98ac28898260\") " pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.127687 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzrsn\" (UniqueName: \"kubernetes.io/projected/bbc3413d-60d0-477c-a252-98ac28898260-kube-api-access-dzrsn\") pod \"mysqld-exporter-e69e-account-create-update-qnj69\" (UID: \"bbc3413d-60d0-477c-a252-98ac28898260\") " 
pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.128099 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc3413d-60d0-477c-a252-98ac28898260-operator-scripts\") pod \"mysqld-exporter-e69e-account-create-update-qnj69\" (UID: \"bbc3413d-60d0-477c-a252-98ac28898260\") " pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.128887 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc3413d-60d0-477c-a252-98ac28898260-operator-scripts\") pod \"mysqld-exporter-e69e-account-create-update-qnj69\" (UID: \"bbc3413d-60d0-477c-a252-98ac28898260\") " pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.147595 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzrsn\" (UniqueName: \"kubernetes.io/projected/bbc3413d-60d0-477c-a252-98ac28898260-kube-api-access-dzrsn\") pod \"mysqld-exporter-e69e-account-create-update-qnj69\" (UID: \"bbc3413d-60d0-477c-a252-98ac28898260\") " pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.241093 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.493052 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sk5jc"] Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.500140 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sk5jc"] Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.604646 4803 generic.go:334] "Generic (PLEG): container finished" podID="d2331ee6-b42a-43ef-b314-ab0084130872" containerID="44eebac9d3582f51e08e051e75ff94b98d8bc1c5d73dc3b58bdef79de72df67e" exitCode=0 Jan 27 22:09:59 crc kubenswrapper[4803]: I0127 22:09:59.604684 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" event={"ID":"d2331ee6-b42a-43ef-b314-ab0084130872","Type":"ContainerDied","Data":"44eebac9d3582f51e08e051e75ff94b98d8bc1c5d73dc3b58bdef79de72df67e"} Jan 27 22:10:00 crc kubenswrapper[4803]: I0127 22:10:00.324380 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85dd1a77-7c09-44a3-bc49-0f19dff3c948" path="/var/lib/kubelet/pods/85dd1a77-7c09-44a3-bc49-0f19dff3c948/volumes" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.208631 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.278799 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-dns-svc\") pod \"d2331ee6-b42a-43ef-b314-ab0084130872\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.279142 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jl7b\" (UniqueName: \"kubernetes.io/projected/d2331ee6-b42a-43ef-b314-ab0084130872-kube-api-access-2jl7b\") pod \"d2331ee6-b42a-43ef-b314-ab0084130872\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.279247 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-config\") pod \"d2331ee6-b42a-43ef-b314-ab0084130872\" (UID: \"d2331ee6-b42a-43ef-b314-ab0084130872\") " Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.300150 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2331ee6-b42a-43ef-b314-ab0084130872-kube-api-access-2jl7b" (OuterVolumeSpecName: "kube-api-access-2jl7b") pod "d2331ee6-b42a-43ef-b314-ab0084130872" (UID: "d2331ee6-b42a-43ef-b314-ab0084130872"). InnerVolumeSpecName "kube-api-access-2jl7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.355180 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-config" (OuterVolumeSpecName: "config") pod "d2331ee6-b42a-43ef-b314-ab0084130872" (UID: "d2331ee6-b42a-43ef-b314-ab0084130872"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.357505 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2331ee6-b42a-43ef-b314-ab0084130872" (UID: "d2331ee6-b42a-43ef-b314-ab0084130872"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.399984 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.400319 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jl7b\" (UniqueName: \"kubernetes.io/projected/d2331ee6-b42a-43ef-b314-ab0084130872-kube-api-access-2jl7b\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.400331 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2331ee6-b42a-43ef-b314-ab0084130872-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.588859 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-e69e-account-create-update-qnj69"] Jan 27 22:10:01 crc kubenswrapper[4803]: W0127 22:10:01.626396 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc3413d_60d0_477c_a252_98ac28898260.slice/crio-1f37f615715c4bd7e8ba60ff5f96e189d4c501faac0f6fefbfc824d6cc0cb98c WatchSource:0}: Error finding container 1f37f615715c4bd7e8ba60ff5f96e189d4c501faac0f6fefbfc824d6cc0cb98c: Status 404 returned error can't find the container with id 1f37f615715c4bd7e8ba60ff5f96e189d4c501faac0f6fefbfc824d6cc0cb98c Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.626967 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" event={"ID":"d2331ee6-b42a-43ef-b314-ab0084130872","Type":"ContainerDied","Data":"517183d08ed6a7369cdbbb32d2fe615678d483897588d6f15c7a3a6b86d481a7"} Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.627016 4803 scope.go:117] "RemoveContainer" containerID="44eebac9d3582f51e08e051e75ff94b98d8bc1c5d73dc3b58bdef79de72df67e" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.627048 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q2v4v" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.630000 4803 generic.go:334] "Generic (PLEG): container finished" podID="73021b6c-3762-44f7-af8d-efd3ff4e4b7b" containerID="0b2c830dc721a2edad3fd418354a9a2e73aa5da7b6de027ce46a3e2b2064fa6b" exitCode=0 Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.630029 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73021b6c-3762-44f7-af8d-efd3ff4e4b7b","Type":"ContainerDied","Data":"0b2c830dc721a2edad3fd418354a9a2e73aa5da7b6de027ce46a3e2b2064fa6b"} Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.634246 4803 generic.go:334] "Generic (PLEG): container finished" podID="50e2e860-a414-4c3e-888e-ac5873f13d2d" containerID="c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd" exitCode=0 Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.634336 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"50e2e860-a414-4c3e-888e-ac5873f13d2d","Type":"ContainerDied","Data":"c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd"} Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.636793 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"006465d9-12d6-4d2e-a02e-8a2669bdcbef","Type":"ContainerStarted","Data":"3ab14d866946a49b52097f3eab160d5549a9b1215efbfc0611543f594392f497"} Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.641272 4803 generic.go:334] "Generic (PLEG): container finished" podID="254b4a13-ff42-41cb-ae18-373ad9cfc583" containerID="ca8197e506a06cf62307479ac31e9ea0d6627d531e6aead1b3345820efde09db" exitCode=0 Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.641340 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"254b4a13-ff42-41cb-ae18-373ad9cfc583","Type":"ContainerDied","Data":"ca8197e506a06cf62307479ac31e9ea0d6627d531e6aead1b3345820efde09db"} Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.643208 4803 generic.go:334] "Generic (PLEG): container finished" podID="993ad889-77c3-480e-8b5b-985766d488be" containerID="c21b90b93949fe0dc88c565a42c81d7fafe84c23ccf407e2c619db232c66744d" exitCode=0 Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.643234 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"993ad889-77c3-480e-8b5b-985766d488be","Type":"ContainerDied","Data":"c21b90b93949fe0dc88c565a42c81d7fafe84c23ccf407e2c619db232c66744d"} Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.692538 4803 scope.go:117] "RemoveContainer" containerID="995b5e492294e6c71e0885c93f16289b7820bff6b9d1a06188083c3549b22660" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.763342 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p"] Jan 27 22:10:01 crc kubenswrapper[4803]: W0127 22:10:01.766203 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf306541_6ada_4bf3_8a32_a1de57044cf8.slice/crio-225b931ae520f2b34abcd3010c226141cfad561ce9e07bcafa82dba64f15dee6 WatchSource:0}: Error finding container 225b931ae520f2b34abcd3010c226141cfad561ce9e07bcafa82dba64f15dee6: Status 404 returned error can't find the container with id 225b931ae520f2b34abcd3010c226141cfad561ce9e07bcafa82dba64f15dee6 Jan 27 22:10:01 crc 
kubenswrapper[4803]: I0127 22:10:01.775738 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q2v4v"] Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.793732 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q2v4v"] Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.803618 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-qptdp"] Jan 27 22:10:01 crc kubenswrapper[4803]: E0127 22:10:01.804067 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2331ee6-b42a-43ef-b314-ab0084130872" containerName="dnsmasq-dns" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.804080 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2331ee6-b42a-43ef-b314-ab0084130872" containerName="dnsmasq-dns" Jan 27 22:10:01 crc kubenswrapper[4803]: E0127 22:10:01.804118 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2331ee6-b42a-43ef-b314-ab0084130872" containerName="init" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.804125 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2331ee6-b42a-43ef-b314-ab0084130872" containerName="init" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.804333 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2331ee6-b42a-43ef-b314-ab0084130872" containerName="dnsmasq-dns" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.805075 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.806648 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.808012 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5kkmh" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.812394 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qptdp"] Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.918186 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-config-data\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.918260 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-combined-ca-bundle\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.918299 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w8qk\" (UniqueName: \"kubernetes.io/projected/a7065dfd-1cab-471d-9aa5-60cee3714a4e-kube-api-access-6w8qk\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:01 crc kubenswrapper[4803]: I0127 22:10:01.918323 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-db-sync-config-data\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.020099 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-config-data\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.020438 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-combined-ca-bundle\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.020466 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w8qk\" (UniqueName: \"kubernetes.io/projected/a7065dfd-1cab-471d-9aa5-60cee3714a4e-kube-api-access-6w8qk\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.020486 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-db-sync-config-data\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.026001 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-db-sync-config-data\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.026176 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-config-data\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.026244 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-combined-ca-bundle\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.040147 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w8qk\" (UniqueName: \"kubernetes.io/projected/a7065dfd-1cab-471d-9aa5-60cee3714a4e-kube-api-access-6w8qk\") pod \"glance-db-sync-qptdp\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.193442 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.336619 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2331ee6-b42a-43ef-b314-ab0084130872" path="/var/lib/kubelet/pods/d2331ee6-b42a-43ef-b314-ab0084130872/volumes" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.696474 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"50e2e860-a414-4c3e-888e-ac5873f13d2d","Type":"ContainerStarted","Data":"2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c"} Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.697464 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.707240 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-96md4" event={"ID":"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00","Type":"ContainerStarted","Data":"58f1bf37226e43a1d4fae1dbdc21f2a6c510ec060654d8306422fa0ef3062419"} Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.717664 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"254b4a13-ff42-41cb-ae18-373ad9cfc583","Type":"ContainerStarted","Data":"8ec67556886515f6008a9fa50849706d9289834f472c1d9b14eb5cee98a8b6cc"} Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.718617 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.723876 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p" event={"ID":"cf306541-6ada-4bf3-8a32-a1de57044cf8","Type":"ContainerStarted","Data":"70241e71e7051c5b95d12de46ceb1ab3094a8d6e53f3379393fdc327fc312048"} Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.723914 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p" event={"ID":"cf306541-6ada-4bf3-8a32-a1de57044cf8","Type":"ContainerStarted","Data":"225b931ae520f2b34abcd3010c226141cfad561ce9e07bcafa82dba64f15dee6"} Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.734279 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"993ad889-77c3-480e-8b5b-985766d488be","Type":"ContainerStarted","Data":"c8f3ab958869c4a1752ed84096c00c9007044ec28b1cea82af402fadc15df134"} Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.735598 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.740105 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73021b6c-3762-44f7-af8d-efd3ff4e4b7b","Type":"ContainerStarted","Data":"c5a3ecc082d0bd45b33fb4d378b55ad449b2c0268808df488b266d8add88c35e"} Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.740759 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.747339 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" event={"ID":"bbc3413d-60d0-477c-a252-98ac28898260","Type":"ContainerStarted","Data":"0d923d35d781b4ba319e48a3b2130673ba87c106f175c990a6da1a3757ca74c6"} Jan 27 22:10:02 crc 
kubenswrapper[4803]: I0127 22:10:02.747403 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" event={"ID":"bbc3413d-60d0-477c-a252-98ac28898260","Type":"ContainerStarted","Data":"1f37f615715c4bd7e8ba60ff5f96e189d4c501faac0f6fefbfc824d6cc0cb98c"} Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.748334 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371975.106464 podStartE2EDuration="1m1.748311687s" podCreationTimestamp="2026-01-27 22:09:01 +0000 UTC" firstStartedPulling="2026-01-27 22:09:04.030985184 +0000 UTC m=+1296.447006883" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:02.735145712 +0000 UTC m=+1355.151167411" watchObservedRunningTime="2026-01-27 22:10:02.748311687 +0000 UTC m=+1355.164333386" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.756324 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-96md4" podStartSLOduration=5.899919763 podStartE2EDuration="9.756309122s" podCreationTimestamp="2026-01-27 22:09:53 +0000 UTC" firstStartedPulling="2026-01-27 22:09:57.402314307 +0000 UTC m=+1349.818336006" lastFinishedPulling="2026-01-27 22:10:01.258703666 +0000 UTC m=+1353.674725365" observedRunningTime="2026-01-27 22:10:02.754153824 +0000 UTC m=+1355.170175523" watchObservedRunningTime="2026-01-27 22:10:02.756309122 +0000 UTC m=+1355.172330821" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.797439 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.805524916 podStartE2EDuration="1m1.797408849s" podCreationTimestamp="2026-01-27 22:09:01 +0000 UTC" firstStartedPulling="2026-01-27 22:09:03.595695284 +0000 UTC m=+1296.011716983" lastFinishedPulling="2026-01-27 22:09:27.587579217 +0000 UTC m=+1320.003600916" observedRunningTime="2026-01-27 22:10:02.791381757 +0000 UTC m=+1355.207403456" watchObservedRunningTime="2026-01-27 22:10:02.797408849 +0000 UTC m=+1355.213430548" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.831186 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.132833467 podStartE2EDuration="1m1.831164169s" podCreationTimestamp="2026-01-27 22:09:01 +0000 UTC" firstStartedPulling="2026-01-27 22:09:03.879236964 +0000 UTC m=+1296.295258653" lastFinishedPulling="2026-01-27 22:09:27.577567656 +0000 UTC m=+1319.993589355" observedRunningTime="2026-01-27 22:10:02.816157585 +0000 UTC m=+1355.232179314" watchObservedRunningTime="2026-01-27 22:10:02.831164169 +0000 UTC m=+1355.247185868" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.847542 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p" podStartSLOduration=4.84751663 podStartE2EDuration="4.84751663s" podCreationTimestamp="2026-01-27 22:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:02.833269386 +0000 UTC m=+1355.249291085" watchObservedRunningTime="2026-01-27 22:10:02.84751663 +0000 UTC m=+1355.263538339" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.866630 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=38.468581475 podStartE2EDuration="1m1.866608384s" podCreationTimestamp="2026-01-27 22:09:01 +0000 UTC" firstStartedPulling="2026-01-27 22:09:04.237980972 +0000 UTC m=+1296.654002671" lastFinishedPulling="2026-01-27 22:09:27.636007881 +0000 UTC m=+1320.052029580" observedRunningTime="2026-01-27 22:10:02.853740897 +0000 UTC m=+1355.269762616" watchObservedRunningTime="2026-01-27 22:10:02.866608384 +0000 UTC m=+1355.282630073" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.881236 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" podStartSLOduration=4.881221168 podStartE2EDuration="4.881221168s" podCreationTimestamp="2026-01-27 22:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:02.879249844 +0000 UTC m=+1355.295271543" watchObservedRunningTime="2026-01-27 22:10:02.881221168 +0000 UTC m=+1355.297242867" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.945431 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-klv9j"] Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.946771 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-klv9j" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.955146 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 22:10:02 crc kubenswrapper[4803]: I0127 22:10:02.963888 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-klv9j"] Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.027965 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qptdp"] Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.029740 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.065114 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4dd1516-79e6-4f6a-98a9-b672312cc668-operator-scripts\") pod \"root-account-create-update-klv9j\" (UID: \"d4dd1516-79e6-4f6a-98a9-b672312cc668\") " pod="openstack/root-account-create-update-klv9j" Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.065624 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phpcs\" (UniqueName: \"kubernetes.io/projected/d4dd1516-79e6-4f6a-98a9-b672312cc668-kube-api-access-phpcs\") pod \"root-account-create-update-klv9j\" (UID: \"d4dd1516-79e6-4f6a-98a9-b672312cc668\") " pod="openstack/root-account-create-update-klv9j" Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.167625 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4dd1516-79e6-4f6a-98a9-b672312cc668-operator-scripts\") pod \"root-account-create-update-klv9j\" (UID: \"d4dd1516-79e6-4f6a-98a9-b672312cc668\") " pod="openstack/root-account-create-update-klv9j" Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.168166 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phpcs\" (UniqueName: 
\"kubernetes.io/projected/d4dd1516-79e6-4f6a-98a9-b672312cc668-kube-api-access-phpcs\") pod \"root-account-create-update-klv9j\" (UID: \"d4dd1516-79e6-4f6a-98a9-b672312cc668\") " pod="openstack/root-account-create-update-klv9j" Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.168329 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4dd1516-79e6-4f6a-98a9-b672312cc668-operator-scripts\") pod \"root-account-create-update-klv9j\" (UID: \"d4dd1516-79e6-4f6a-98a9-b672312cc668\") " pod="openstack/root-account-create-update-klv9j" Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.195707 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phpcs\" (UniqueName: \"kubernetes.io/projected/d4dd1516-79e6-4f6a-98a9-b672312cc668-kube-api-access-phpcs\") pod \"root-account-create-update-klv9j\" (UID: \"d4dd1516-79e6-4f6a-98a9-b672312cc668\") " pod="openstack/root-account-create-update-klv9j" Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.267776 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-klv9j" Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.763763 4803 generic.go:334] "Generic (PLEG): container finished" podID="bbc3413d-60d0-477c-a252-98ac28898260" containerID="0d923d35d781b4ba319e48a3b2130673ba87c106f175c990a6da1a3757ca74c6" exitCode=0 Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.764509 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" event={"ID":"bbc3413d-60d0-477c-a252-98ac28898260","Type":"ContainerDied","Data":"0d923d35d781b4ba319e48a3b2130673ba87c106f175c990a6da1a3757ca74c6"} Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.773042 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qptdp" event={"ID":"a7065dfd-1cab-471d-9aa5-60cee3714a4e","Type":"ContainerStarted","Data":"66cda4f1f191be615f0ba89ef4e1bdf977d2d3fb0c5e1d5b85c112a9df27e538"} Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.775492 4803 generic.go:334] "Generic (PLEG): container finished" podID="cf306541-6ada-4bf3-8a32-a1de57044cf8" containerID="70241e71e7051c5b95d12de46ceb1ab3094a8d6e53f3379393fdc327fc312048" exitCode=0 Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.776734 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p" event={"ID":"cf306541-6ada-4bf3-8a32-a1de57044cf8","Type":"ContainerDied","Data":"70241e71e7051c5b95d12de46ceb1ab3094a8d6e53f3379393fdc327fc312048"} Jan 27 22:10:03 crc kubenswrapper[4803]: I0127 22:10:03.865541 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-klv9j"] Jan 27 22:10:04 crc kubenswrapper[4803]: I0127 22:10:04.799041 4803 generic.go:334] "Generic (PLEG): container finished" podID="d4dd1516-79e6-4f6a-98a9-b672312cc668" containerID="0df0fa41de09f9d42ff3d143051bd60fcb1927165b4af21f9d910db4c20c28bd" exitCode=0 Jan 27 22:10:04 crc kubenswrapper[4803]: I0127 22:10:04.799412 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-klv9j" event={"ID":"d4dd1516-79e6-4f6a-98a9-b672312cc668","Type":"ContainerDied","Data":"0df0fa41de09f9d42ff3d143051bd60fcb1927165b4af21f9d910db4c20c28bd"} Jan 27 22:10:04 crc kubenswrapper[4803]: I0127 22:10:04.799440 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-klv9j" event={"ID":"d4dd1516-79e6-4f6a-98a9-b672312cc668","Type":"ContainerStarted","Data":"226924510eb59d5a69ba795ca3b429421ab0f32979081583556a4969d53fa67b"} Jan 27 22:10:04 crc kubenswrapper[4803]: I0127 22:10:04.893456 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-99c48dff5-sj7f4" podUID="e62b2a29-1e10-4064-93da-24b6d5e88397" containerName="console" containerID="cri-o://93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2" gracePeriod=15 Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.412391 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.421816 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:10:05 crc kubenswrapper[4803]: E0127 22:10:05.422084 4803 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 22:10:05 crc kubenswrapper[4803]: E0127 22:10:05.422108 4803 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 22:10:05 crc kubenswrapper[4803]: E0127 22:10:05.422164 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift podName:72f06f5c-7c0f-4969-89a2-b16210f935c4 nodeName:}" failed. No retries permitted until 2026-01-27 22:10:21.422148928 +0000 UTC m=+1373.838170627 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift") pod "swift-storage-0" (UID: "72f06f5c-7c0f-4969-89a2-b16210f935c4") : configmap "swift-ring-files" not found Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.523921 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzrsn\" (UniqueName: \"kubernetes.io/projected/bbc3413d-60d0-477c-a252-98ac28898260-kube-api-access-dzrsn\") pod \"bbc3413d-60d0-477c-a252-98ac28898260\" (UID: \"bbc3413d-60d0-477c-a252-98ac28898260\") " Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.524128 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc3413d-60d0-477c-a252-98ac28898260-operator-scripts\") pod \"bbc3413d-60d0-477c-a252-98ac28898260\" (UID: \"bbc3413d-60d0-477c-a252-98ac28898260\") " Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.525463 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc3413d-60d0-477c-a252-98ac28898260-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbc3413d-60d0-477c-a252-98ac28898260" (UID: "bbc3413d-60d0-477c-a252-98ac28898260"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.531687 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc3413d-60d0-477c-a252-98ac28898260-kube-api-access-dzrsn" (OuterVolumeSpecName: "kube-api-access-dzrsn") pod "bbc3413d-60d0-477c-a252-98ac28898260" (UID: "bbc3413d-60d0-477c-a252-98ac28898260"). InnerVolumeSpecName "kube-api-access-dzrsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.626250 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc3413d-60d0-477c-a252-98ac28898260-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.626287 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzrsn\" (UniqueName: \"kubernetes.io/projected/bbc3413d-60d0-477c-a252-98ac28898260-kube-api-access-dzrsn\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.704902 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.749932 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-99c48dff5-sj7f4_e62b2a29-1e10-4064-93da-24b6d5e88397/console/0.log" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.750009 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.832108 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvwsk\" (UniqueName: \"kubernetes.io/projected/e62b2a29-1e10-4064-93da-24b6d5e88397-kube-api-access-pvwsk\") pod \"e62b2a29-1e10-4064-93da-24b6d5e88397\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.832170 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-oauth-config\") pod \"e62b2a29-1e10-4064-93da-24b6d5e88397\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.832285 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-trusted-ca-bundle\") pod \"e62b2a29-1e10-4064-93da-24b6d5e88397\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.832319 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf306541-6ada-4bf3-8a32-a1de57044cf8-operator-scripts\") pod \"cf306541-6ada-4bf3-8a32-a1de57044cf8\" (UID: \"cf306541-6ada-4bf3-8a32-a1de57044cf8\") " Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.832340 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-console-config\") pod \"e62b2a29-1e10-4064-93da-24b6d5e88397\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 
22:10:05.832369 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-service-ca\") pod \"e62b2a29-1e10-4064-93da-24b6d5e88397\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.832590 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-serving-cert\") pod \"e62b2a29-1e10-4064-93da-24b6d5e88397\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.832962 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zdnn\" (UniqueName: \"kubernetes.io/projected/cf306541-6ada-4bf3-8a32-a1de57044cf8-kube-api-access-7zdnn\") pod \"cf306541-6ada-4bf3-8a32-a1de57044cf8\" (UID: \"cf306541-6ada-4bf3-8a32-a1de57044cf8\") " Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.833881 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-oauth-serving-cert\") pod \"e62b2a29-1e10-4064-93da-24b6d5e88397\" (UID: \"e62b2a29-1e10-4064-93da-24b6d5e88397\") " Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.832952 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf306541-6ada-4bf3-8a32-a1de57044cf8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf306541-6ada-4bf3-8a32-a1de57044cf8" (UID: "cf306541-6ada-4bf3-8a32-a1de57044cf8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.833480 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e62b2a29-1e10-4064-93da-24b6d5e88397" (UID: "e62b2a29-1e10-4064-93da-24b6d5e88397"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.833508 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-service-ca" (OuterVolumeSpecName: "service-ca") pod "e62b2a29-1e10-4064-93da-24b6d5e88397" (UID: "e62b2a29-1e10-4064-93da-24b6d5e88397"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.833561 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-console-config" (OuterVolumeSpecName: "console-config") pod "e62b2a29-1e10-4064-93da-24b6d5e88397" (UID: "e62b2a29-1e10-4064-93da-24b6d5e88397"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.835004 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e62b2a29-1e10-4064-93da-24b6d5e88397" (UID: "e62b2a29-1e10-4064-93da-24b6d5e88397"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.844608 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62b2a29-1e10-4064-93da-24b6d5e88397-kube-api-access-pvwsk" (OuterVolumeSpecName: "kube-api-access-pvwsk") pod "e62b2a29-1e10-4064-93da-24b6d5e88397" (UID: "e62b2a29-1e10-4064-93da-24b6d5e88397"). InnerVolumeSpecName "kube-api-access-pvwsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.846543 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e62b2a29-1e10-4064-93da-24b6d5e88397" (UID: "e62b2a29-1e10-4064-93da-24b6d5e88397"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.846801 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" event={"ID":"bbc3413d-60d0-477c-a252-98ac28898260","Type":"ContainerDied","Data":"1f37f615715c4bd7e8ba60ff5f96e189d4c501faac0f6fefbfc824d6cc0cb98c"} Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.846828 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f37f615715c4bd7e8ba60ff5f96e189d4c501faac0f6fefbfc824d6cc0cb98c" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.846917 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-e69e-account-create-update-qnj69" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.846872 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf306541-6ada-4bf3-8a32-a1de57044cf8-kube-api-access-7zdnn" (OuterVolumeSpecName: "kube-api-access-7zdnn") pod "cf306541-6ada-4bf3-8a32-a1de57044cf8" (UID: "cf306541-6ada-4bf3-8a32-a1de57044cf8"). InnerVolumeSpecName "kube-api-access-7zdnn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.851062 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-99c48dff5-sj7f4_e62b2a29-1e10-4064-93da-24b6d5e88397/console/0.log" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.851123 4803 generic.go:334] "Generic (PLEG): container finished" podID="e62b2a29-1e10-4064-93da-24b6d5e88397" containerID="93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2" exitCode=2 Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.851190 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-99c48dff5-sj7f4" event={"ID":"e62b2a29-1e10-4064-93da-24b6d5e88397","Type":"ContainerDied","Data":"93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2"} Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.851220 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-99c48dff5-sj7f4" event={"ID":"e62b2a29-1e10-4064-93da-24b6d5e88397","Type":"ContainerDied","Data":"0c8a83fecdb0017674feb5f5972115987cf4f7b9bd8163111013fb89556df6a6"} Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.851238 4803 scope.go:117] "RemoveContainer" containerID="93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.851341 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-99c48dff5-sj7f4" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.853779 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p" event={"ID":"cf306541-6ada-4bf3-8a32-a1de57044cf8","Type":"ContainerDied","Data":"225b931ae520f2b34abcd3010c226141cfad561ce9e07bcafa82dba64f15dee6"} Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.853818 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="225b931ae520f2b34abcd3010c226141cfad561ce9e07bcafa82dba64f15dee6" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.853996 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.856251 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e62b2a29-1e10-4064-93da-24b6d5e88397" (UID: "e62b2a29-1e10-4064-93da-24b6d5e88397"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.936163 4803 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.936429 4803 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.936437 4803 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.936449 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zdnn\" (UniqueName: \"kubernetes.io/projected/cf306541-6ada-4bf3-8a32-a1de57044cf8-kube-api-access-7zdnn\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.936458 4803 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.936466 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvwsk\" (UniqueName: \"kubernetes.io/projected/e62b2a29-1e10-4064-93da-24b6d5e88397-kube-api-access-pvwsk\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.936474 4803 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e62b2a29-1e10-4064-93da-24b6d5e88397-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.936485 4803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e62b2a29-1e10-4064-93da-24b6d5e88397-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:05 crc kubenswrapper[4803]: I0127 22:10:05.936495 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf306541-6ada-4bf3-8a32-a1de57044cf8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:06 crc kubenswrapper[4803]: I0127 22:10:06.193380 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-99c48dff5-sj7f4"] Jan 27 22:10:06 crc kubenswrapper[4803]: I0127 22:10:06.203210 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-99c48dff5-sj7f4"] Jan 27 22:10:06 crc kubenswrapper[4803]: I0127 22:10:06.319041 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62b2a29-1e10-4064-93da-24b6d5e88397" path="/var/lib/kubelet/pods/e62b2a29-1e10-4064-93da-24b6d5e88397/volumes" Jan 27 22:10:06 crc kubenswrapper[4803]: I0127 22:10:06.975114 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xfps2" podUID="3f1dc5cb-1275-4cf9-8c71-f9575161f73f" containerName="ovn-controller" probeResult="failure" output=< Jan 27 22:10:06 crc kubenswrapper[4803]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 22:10:06 crc kubenswrapper[4803]: > Jan 27 
22:10:06 crc kubenswrapper[4803]: I0127 22:10:06.980454 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5ch2x" Jan 27 22:10:06 crc kubenswrapper[4803]: I0127 22:10:06.986733 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5ch2x" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.150761 4803 scope.go:117] "RemoveContainer" containerID="93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2" Jan 27 22:10:07 crc kubenswrapper[4803]: E0127 22:10:07.151977 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2\": container with ID starting with 93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2 not found: ID does not exist" containerID="93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.152007 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2"} err="failed to get container status \"93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2\": rpc error: code = NotFound desc = could not find container \"93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2\": container with ID starting with 93c82633be5ad5fab577ef5dbafdbf80e617f0e0caf0b29028e9d19ee6da3fd2 not found: ID does not exist" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.226634 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xfps2-config-h7b47"] Jan 27 22:10:07 crc kubenswrapper[4803]: E0127 22:10:07.227080 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf306541-6ada-4bf3-8a32-a1de57044cf8" containerName="mariadb-database-create" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.227096 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf306541-6ada-4bf3-8a32-a1de57044cf8" containerName="mariadb-database-create" Jan 27 22:10:07 crc kubenswrapper[4803]: E0127 22:10:07.227107 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62b2a29-1e10-4064-93da-24b6d5e88397" containerName="console" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.227114 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62b2a29-1e10-4064-93da-24b6d5e88397" containerName="console" Jan 27 22:10:07 crc kubenswrapper[4803]: E0127 22:10:07.227135 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc3413d-60d0-477c-a252-98ac28898260" containerName="mariadb-account-create-update" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.227141 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc3413d-60d0-477c-a252-98ac28898260" containerName="mariadb-account-create-update" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.227326 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf306541-6ada-4bf3-8a32-a1de57044cf8" containerName="mariadb-database-create" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.227355 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc3413d-60d0-477c-a252-98ac28898260" containerName="mariadb-account-create-update" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.227363 4803 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e62b2a29-1e10-4064-93da-24b6d5e88397" containerName="console" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.228048 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.229608 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.244437 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xfps2-config-h7b47"] Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.252586 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-klv9j" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.260503 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-scripts\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.260604 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-log-ovn\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.260637 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.260665 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run-ovn\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.260682 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-additional-scripts\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.260812 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntvrv\" (UniqueName: \"kubernetes.io/projected/77919224-bce7-406a-9c69-18baf881e6c8-kube-api-access-ntvrv\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.365216 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phpcs\" (UniqueName: 
\"kubernetes.io/projected/d4dd1516-79e6-4f6a-98a9-b672312cc668-kube-api-access-phpcs\") pod \"d4dd1516-79e6-4f6a-98a9-b672312cc668\" (UID: \"d4dd1516-79e6-4f6a-98a9-b672312cc668\") " Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.365291 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4dd1516-79e6-4f6a-98a9-b672312cc668-operator-scripts\") pod \"d4dd1516-79e6-4f6a-98a9-b672312cc668\" (UID: \"d4dd1516-79e6-4f6a-98a9-b672312cc668\") " Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.365778 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-log-ovn\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.365832 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.365885 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run-ovn\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.365905 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-additional-scripts\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.366033 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntvrv\" (UniqueName: \"kubernetes.io/projected/77919224-bce7-406a-9c69-18baf881e6c8-kube-api-access-ntvrv\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.366117 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-scripts\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.367295 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run-ovn\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.368099 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4dd1516-79e6-4f6a-98a9-b672312cc668-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "d4dd1516-79e6-4f6a-98a9-b672312cc668" (UID: "d4dd1516-79e6-4f6a-98a9-b672312cc668"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.368411 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-additional-scripts\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.368489 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-log-ovn\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.368716 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.372259 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-scripts\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.381054 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4dd1516-79e6-4f6a-98a9-b672312cc668-kube-api-access-phpcs" (OuterVolumeSpecName: "kube-api-access-phpcs") pod "d4dd1516-79e6-4f6a-98a9-b672312cc668" (UID: "d4dd1516-79e6-4f6a-98a9-b672312cc668"). InnerVolumeSpecName "kube-api-access-phpcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.392206 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntvrv\" (UniqueName: \"kubernetes.io/projected/77919224-bce7-406a-9c69-18baf881e6c8-kube-api-access-ntvrv\") pod \"ovn-controller-xfps2-config-h7b47\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.470375 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phpcs\" (UniqueName: \"kubernetes.io/projected/d4dd1516-79e6-4f6a-98a9-b672312cc668-kube-api-access-phpcs\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.470672 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4dd1516-79e6-4f6a-98a9-b672312cc668-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.560605 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.881769 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"006465d9-12d6-4d2e-a02e-8a2669bdcbef","Type":"ContainerStarted","Data":"5f9e62f9ea8085f67cb42807553e370bf4079aff1c498804f5c245734544bdc2"} Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.885058 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-klv9j" event={"ID":"d4dd1516-79e6-4f6a-98a9-b672312cc668","Type":"ContainerDied","Data":"226924510eb59d5a69ba795ca3b429421ab0f32979081583556a4969d53fa67b"} Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.885100 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="226924510eb59d5a69ba795ca3b429421ab0f32979081583556a4969d53fa67b" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.885164 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-klv9j" Jan 27 22:10:07 crc kubenswrapper[4803]: I0127 22:10:07.925127 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.668393998 podStartE2EDuration="59.925105946s" podCreationTimestamp="2026-01-27 22:09:08 +0000 UTC" firstStartedPulling="2026-01-27 22:09:28.960591795 +0000 UTC m=+1321.376613494" lastFinishedPulling="2026-01-27 22:10:07.217303753 +0000 UTC m=+1359.633325442" observedRunningTime="2026-01-27 22:10:07.919365432 +0000 UTC m=+1360.335387141" watchObservedRunningTime="2026-01-27 22:10:07.925105946 +0000 UTC m=+1360.341127655" Jan 27 22:10:08 crc kubenswrapper[4803]: I0127 22:10:08.078990 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xfps2-config-h7b47"] Jan 27 22:10:08 crc kubenswrapper[4803]: W0127 22:10:08.083401 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77919224_bce7_406a_9c69_18baf881e6c8.slice/crio-402ca1dbd90e56843fb802f012e20d9b5af0a2bd82dfeda559b64f352607f414 WatchSource:0}: Error finding container 402ca1dbd90e56843fb802f012e20d9b5af0a2bd82dfeda559b64f352607f414: Status 404 returned error can't find the container with id 402ca1dbd90e56843fb802f012e20d9b5af0a2bd82dfeda559b64f352607f414 Jan 27 22:10:08 crc kubenswrapper[4803]: I0127 22:10:08.897935 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xfps2-config-h7b47" event={"ID":"77919224-bce7-406a-9c69-18baf881e6c8","Type":"ContainerStarted","Data":"402ca1dbd90e56843fb802f012e20d9b5af0a2bd82dfeda559b64f352607f414"} Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.044940 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.176890 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 22:10:09 crc kubenswrapper[4803]: E0127 22:10:09.177404 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4dd1516-79e6-4f6a-98a9-b672312cc668" containerName="mariadb-account-create-update" Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.177424 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4dd1516-79e6-4f6a-98a9-b672312cc668" containerName="mariadb-account-create-update" Jan 27 22:10:09 crc 
Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.178597 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.183380 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.193876 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.203975 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " pod="openstack/mysqld-exporter-0"
Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.204075 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84l6c\" (UniqueName: \"kubernetes.io/projected/4414c4e3-3baa-4339-95de-5dc17a42210b-kube-api-access-84l6c\") pod \"mysqld-exporter-0\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " pod="openstack/mysqld-exporter-0"
Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.204122 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-config-data\") pod \"mysqld-exporter-0\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " pod="openstack/mysqld-exporter-0"
Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.305968 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-config-data\") pod \"mysqld-exporter-0\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " pod="openstack/mysqld-exporter-0"
Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.307599 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " pod="openstack/mysqld-exporter-0"
Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.307751 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84l6c\" (UniqueName: \"kubernetes.io/projected/4414c4e3-3baa-4339-95de-5dc17a42210b-kube-api-access-84l6c\") pod \"mysqld-exporter-0\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " pod="openstack/mysqld-exporter-0"
Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.327770 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-config-data\") pod \"mysqld-exporter-0\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " pod="openstack/mysqld-exporter-0"
Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.327945 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " pod="openstack/mysqld-exporter-0" Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.329811 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84l6c\" (UniqueName: \"kubernetes.io/projected/4414c4e3-3baa-4339-95de-5dc17a42210b-kube-api-access-84l6c\") pod \"mysqld-exporter-0\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " pod="openstack/mysqld-exporter-0" Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.508273 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-klv9j"] Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.515203 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.516123 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-klv9j"] Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.792884 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.792931 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.795750 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.924373 4803 generic.go:334] "Generic (PLEG): container finished" podID="77919224-bce7-406a-9c69-18baf881e6c8" containerID="35213ba6fcff817ece2d58f0ad20116c4c080029024f22b6cb2454e2ce320988" exitCode=0 Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.925834 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xfps2-config-h7b47" event={"ID":"77919224-bce7-406a-9c69-18baf881e6c8","Type":"ContainerDied","Data":"35213ba6fcff817ece2d58f0ad20116c4c080029024f22b6cb2454e2ce320988"} Jan 27 22:10:09 crc kubenswrapper[4803]: I0127 22:10:09.928355 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:10 crc kubenswrapper[4803]: I0127 22:10:10.058652 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 22:10:10 crc kubenswrapper[4803]: I0127 22:10:10.319919 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4dd1516-79e6-4f6a-98a9-b672312cc668" path="/var/lib/kubelet/pods/d4dd1516-79e6-4f6a-98a9-b672312cc668/volumes" Jan 27 22:10:10 crc kubenswrapper[4803]: I0127 22:10:10.933421 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4414c4e3-3baa-4339-95de-5dc17a42210b","Type":"ContainerStarted","Data":"3b5022d02f87d5a99a6d37ba681dc3432312c489260542280c42dc5299892437"} Jan 27 22:10:10 crc kubenswrapper[4803]: I0127 22:10:10.935347 4803 generic.go:334] "Generic (PLEG): container finished" podID="33e4fbb3-3248-49d9-8302-cf3f0bc8ef00" containerID="58f1bf37226e43a1d4fae1dbdc21f2a6c510ec060654d8306422fa0ef3062419" exitCode=0 Jan 27 22:10:10 crc kubenswrapper[4803]: I0127 22:10:10.935417 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-96md4" 
event={"ID":"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00","Type":"ContainerDied","Data":"58f1bf37226e43a1d4fae1dbdc21f2a6c510ec060654d8306422fa0ef3062419"} Jan 27 22:10:11 crc kubenswrapper[4803]: I0127 22:10:11.899585 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xfps2" Jan 27 22:10:12 crc kubenswrapper[4803]: I0127 22:10:12.650364 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 22:10:12 crc kubenswrapper[4803]: I0127 22:10:12.876857 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="254b4a13-ff42-41cb-ae18-373ad9cfc583" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 27 22:10:12 crc kubenswrapper[4803]: I0127 22:10:12.951508 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="prometheus" containerID="cri-o://1d16d2ec6950ee680547e996772b5f061effba959c9b6af212e7a394bfc5dc9f" gracePeriod=600 Jan 27 22:10:12 crc kubenswrapper[4803]: I0127 22:10:12.951569 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="thanos-sidecar" containerID="cri-o://5f9e62f9ea8085f67cb42807553e370bf4079aff1c498804f5c245734544bdc2" gracePeriod=600 Jan 27 22:10:12 crc kubenswrapper[4803]: I0127 22:10:12.951561 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="config-reloader" containerID="cri-o://3ab14d866946a49b52097f3eab160d5549a9b1215efbfc0611543f594392f497" gracePeriod=600 Jan 27 22:10:12 crc kubenswrapper[4803]: I0127 22:10:12.982552 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rwh8m"] Jan 27 22:10:12 crc kubenswrapper[4803]: I0127 22:10:12.983799 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rwh8m" Jan 27 22:10:12 crc kubenswrapper[4803]: I0127 22:10:12.990933 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.000387 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rwh8m"] Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.096454 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22e2e1-6b5c-4535-9e81-29559a44cd40-operator-scripts\") pod \"root-account-create-update-rwh8m\" (UID: \"5e22e2e1-6b5c-4535-9e81-29559a44cd40\") " pod="openstack/root-account-create-update-rwh8m" Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.096540 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24g8t\" (UniqueName: \"kubernetes.io/projected/5e22e2e1-6b5c-4535-9e81-29559a44cd40-kube-api-access-24g8t\") pod \"root-account-create-update-rwh8m\" (UID: \"5e22e2e1-6b5c-4535-9e81-29559a44cd40\") " pod="openstack/root-account-create-update-rwh8m" Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.198684 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24g8t\" (UniqueName: \"kubernetes.io/projected/5e22e2e1-6b5c-4535-9e81-29559a44cd40-kube-api-access-24g8t\") pod \"root-account-create-update-rwh8m\" (UID: \"5e22e2e1-6b5c-4535-9e81-29559a44cd40\") " pod="openstack/root-account-create-update-rwh8m" Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.198895 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22e2e1-6b5c-4535-9e81-29559a44cd40-operator-scripts\") pod \"root-account-create-update-rwh8m\" (UID: \"5e22e2e1-6b5c-4535-9e81-29559a44cd40\") " pod="openstack/root-account-create-update-rwh8m" Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.199645 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22e2e1-6b5c-4535-9e81-29559a44cd40-operator-scripts\") pod \"root-account-create-update-rwh8m\" (UID: \"5e22e2e1-6b5c-4535-9e81-29559a44cd40\") " pod="openstack/root-account-create-update-rwh8m" Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.213984 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="993ad889-77c3-480e-8b5b-985766d488be" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.232823 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24g8t\" (UniqueName: \"kubernetes.io/projected/5e22e2e1-6b5c-4535-9e81-29559a44cd40-kube-api-access-24g8t\") pod \"root-account-create-update-rwh8m\" (UID: \"5e22e2e1-6b5c-4535-9e81-29559a44cd40\") " pod="openstack/root-account-create-update-rwh8m" Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.243350 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="50e2e860-a414-4c3e-888e-ac5873f13d2d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 
22:10:13.330671 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rwh8m" Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.675491 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="73021b6c-3762-44f7-af8d-efd3ff4e4b7b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.962676 4803 generic.go:334] "Generic (PLEG): container finished" podID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerID="5f9e62f9ea8085f67cb42807553e370bf4079aff1c498804f5c245734544bdc2" exitCode=0 Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.962716 4803 generic.go:334] "Generic (PLEG): container finished" podID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerID="3ab14d866946a49b52097f3eab160d5549a9b1215efbfc0611543f594392f497" exitCode=0 Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.962726 4803 generic.go:334] "Generic (PLEG): container finished" podID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerID="1d16d2ec6950ee680547e996772b5f061effba959c9b6af212e7a394bfc5dc9f" exitCode=0 Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.962746 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"006465d9-12d6-4d2e-a02e-8a2669bdcbef","Type":"ContainerDied","Data":"5f9e62f9ea8085f67cb42807553e370bf4079aff1c498804f5c245734544bdc2"} Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.962772 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"006465d9-12d6-4d2e-a02e-8a2669bdcbef","Type":"ContainerDied","Data":"3ab14d866946a49b52097f3eab160d5549a9b1215efbfc0611543f594392f497"} Jan 27 22:10:13 crc kubenswrapper[4803]: I0127 22:10:13.962787 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"006465d9-12d6-4d2e-a02e-8a2669bdcbef","Type":"ContainerDied","Data":"1d16d2ec6950ee680547e996772b5f061effba959c9b6af212e7a394bfc5dc9f"} Jan 27 22:10:14 crc kubenswrapper[4803]: I0127 22:10:14.793725 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.139:9090/-/ready\": dial tcp 10.217.0.139:9090: connect: connection refused" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.276466 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.283797 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.421639 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-dispersionconf\") pod \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.421994 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-log-ovn\") pod \"77919224-bce7-406a-9c69-18baf881e6c8\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422013 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run-ovn\") pod \"77919224-bce7-406a-9c69-18baf881e6c8\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422099 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "77919224-bce7-406a-9c69-18baf881e6c8" (UID: "77919224-bce7-406a-9c69-18baf881e6c8"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422126 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "77919224-bce7-406a-9c69-18baf881e6c8" (UID: "77919224-bce7-406a-9c69-18baf881e6c8"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422169 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-scripts\") pod \"77919224-bce7-406a-9c69-18baf881e6c8\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422191 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-combined-ca-bundle\") pod \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422209 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-scripts\") pod \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422331 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntvrv\" (UniqueName: \"kubernetes.io/projected/77919224-bce7-406a-9c69-18baf881e6c8-kube-api-access-ntvrv\") pod \"77919224-bce7-406a-9c69-18baf881e6c8\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422354 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-ring-data-devices\") pod \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422375 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run\") pod \"77919224-bce7-406a-9c69-18baf881e6c8\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422393 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwxzr\" (UniqueName: \"kubernetes.io/projected/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-kube-api-access-mwxzr\") pod \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422442 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-etc-swift\") pod \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\" (UID: \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422520 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-additional-scripts\") pod \"77919224-bce7-406a-9c69-18baf881e6c8\" (UID: \"77919224-bce7-406a-9c69-18baf881e6c8\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.422567 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-swiftconf\") pod \"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\" (UID: 
\"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.423067 4803 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.423084 4803 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.423208 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-scripts" (OuterVolumeSpecName: "scripts") pod "77919224-bce7-406a-9c69-18baf881e6c8" (UID: "77919224-bce7-406a-9c69-18baf881e6c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.423247 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run" (OuterVolumeSpecName: "var-run") pod "77919224-bce7-406a-9c69-18baf881e6c8" (UID: "77919224-bce7-406a-9c69-18baf881e6c8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.424538 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "77919224-bce7-406a-9c69-18baf881e6c8" (UID: "77919224-bce7-406a-9c69-18baf881e6c8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.425225 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00" (UID: "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.429220 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-kube-api-access-mwxzr" (OuterVolumeSpecName: "kube-api-access-mwxzr") pod "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00" (UID: "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00"). InnerVolumeSpecName "kube-api-access-mwxzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.432364 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77919224-bce7-406a-9c69-18baf881e6c8-kube-api-access-ntvrv" (OuterVolumeSpecName: "kube-api-access-ntvrv") pod "77919224-bce7-406a-9c69-18baf881e6c8" (UID: "77919224-bce7-406a-9c69-18baf881e6c8"). InnerVolumeSpecName "kube-api-access-ntvrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.434296 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00" (UID: "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.448693 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00" (UID: "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.465581 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-scripts" (OuterVolumeSpecName: "scripts") pod "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00" (UID: "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.469926 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00" (UID: "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.476223 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00" (UID: "33e4fbb3-3248-49d9-8302-cf3f0bc8ef00"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.525560 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.525589 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.525599 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.525611 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntvrv\" (UniqueName: \"kubernetes.io/projected/77919224-bce7-406a-9c69-18baf881e6c8-kube-api-access-ntvrv\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.525619 4803 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.525627 4803 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77919224-bce7-406a-9c69-18baf881e6c8-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.525636 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwxzr\" (UniqueName: \"kubernetes.io/projected/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-kube-api-access-mwxzr\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.525644 4803 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.525651 4803 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77919224-bce7-406a-9c69-18baf881e6c8-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.525659 4803 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.525667 4803 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/33e4fbb3-3248-49d9-8302-cf3f0bc8ef00-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.533308 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.626990 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnfcj\" (UniqueName: \"kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-kube-api-access-cnfcj\") pod \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.627111 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-2\") pod \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.627189 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-1\") pod \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.627219 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-thanos-prometheus-http-client-file\") pod \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.627284 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config-out\") pod \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.627313 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config\") pod \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.627359 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-web-config\") pod \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.627531 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\") pod \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.627583 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-0\") pod \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.627607 4803 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-tls-assets\") pod \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\" (UID: \"006465d9-12d6-4d2e-a02e-8a2669bdcbef\") " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.628776 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "006465d9-12d6-4d2e-a02e-8a2669bdcbef" (UID: "006465d9-12d6-4d2e-a02e-8a2669bdcbef"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.629077 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "006465d9-12d6-4d2e-a02e-8a2669bdcbef" (UID: "006465d9-12d6-4d2e-a02e-8a2669bdcbef"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.630669 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "006465d9-12d6-4d2e-a02e-8a2669bdcbef" (UID: "006465d9-12d6-4d2e-a02e-8a2669bdcbef"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.631416 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-kube-api-access-cnfcj" (OuterVolumeSpecName: "kube-api-access-cnfcj") pod "006465d9-12d6-4d2e-a02e-8a2669bdcbef" (UID: "006465d9-12d6-4d2e-a02e-8a2669bdcbef"). InnerVolumeSpecName "kube-api-access-cnfcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.632872 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config" (OuterVolumeSpecName: "config") pod "006465d9-12d6-4d2e-a02e-8a2669bdcbef" (UID: "006465d9-12d6-4d2e-a02e-8a2669bdcbef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.633003 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "006465d9-12d6-4d2e-a02e-8a2669bdcbef" (UID: "006465d9-12d6-4d2e-a02e-8a2669bdcbef"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.634153 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config-out" (OuterVolumeSpecName: "config-out") pod "006465d9-12d6-4d2e-a02e-8a2669bdcbef" (UID: "006465d9-12d6-4d2e-a02e-8a2669bdcbef"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.637515 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "006465d9-12d6-4d2e-a02e-8a2669bdcbef" (UID: "006465d9-12d6-4d2e-a02e-8a2669bdcbef"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.652334 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "006465d9-12d6-4d2e-a02e-8a2669bdcbef" (UID: "006465d9-12d6-4d2e-a02e-8a2669bdcbef"). InnerVolumeSpecName "pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.684529 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-web-config" (OuterVolumeSpecName: "web-config") pod "006465d9-12d6-4d2e-a02e-8a2669bdcbef" (UID: "006465d9-12d6-4d2e-a02e-8a2669bdcbef"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.731465 4803 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\") on node \"crc\" " Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.731504 4803 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.731518 4803 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.731530 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnfcj\" (UniqueName: \"kubernetes.io/projected/006465d9-12d6-4d2e-a02e-8a2669bdcbef-kube-api-access-cnfcj\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.731544 4803 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.731555 4803 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/006465d9-12d6-4d2e-a02e-8a2669bdcbef-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.731568 4803 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: 
I0127 22:10:18.731579 4803 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.731589 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.731600 4803 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/006465d9-12d6-4d2e-a02e-8a2669bdcbef-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.747684 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rwh8m"] Jan 27 22:10:18 crc kubenswrapper[4803]: W0127 22:10:18.761908 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e22e2e1_6b5c_4535_9e81_29559a44cd40.slice/crio-13330cd1c865bf8982f0da938a0f7ade34274a50c5b586a1d65fd1a708f03939 WatchSource:0}: Error finding container 13330cd1c865bf8982f0da938a0f7ade34274a50c5b586a1d65fd1a708f03939: Status 404 returned error can't find the container with id 13330cd1c865bf8982f0da938a0f7ade34274a50c5b586a1d65fd1a708f03939 Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.773719 4803 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.773907 4803 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899") on node "crc" Jan 27 22:10:18 crc kubenswrapper[4803]: I0127 22:10:18.833594 4803 reconciler_common.go:293] "Volume detached for volume \"pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.010335 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qptdp" event={"ID":"a7065dfd-1cab-471d-9aa5-60cee3714a4e","Type":"ContainerStarted","Data":"014982ac8c7718c3705ede520e17600202487e7bf067affd5608ad34427786aa"} Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.013740 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-96md4" event={"ID":"33e4fbb3-3248-49d9-8302-cf3f0bc8ef00","Type":"ContainerDied","Data":"aefcf8ef30a5775942d112ce9ca06dbf9a74d517d6ad021289f7bb77a14bfc2b"} Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.013771 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aefcf8ef30a5775942d112ce9ca06dbf9a74d517d6ad021289f7bb77a14bfc2b" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.013814 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-96md4" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.025917 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"006465d9-12d6-4d2e-a02e-8a2669bdcbef","Type":"ContainerDied","Data":"0af98fc78732b1fc9499d916735ae5173a426b0a5326fd849f3f6579db3a299c"} Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.025974 4803 scope.go:117] "RemoveContainer" containerID="5f9e62f9ea8085f67cb42807553e370bf4079aff1c498804f5c245734544bdc2" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.025983 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.036299 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-qptdp" podStartSLOduration=2.888057309 podStartE2EDuration="18.0362649s" podCreationTimestamp="2026-01-27 22:10:01 +0000 UTC" firstStartedPulling="2026-01-27 22:10:03.028597309 +0000 UTC m=+1355.444619008" lastFinishedPulling="2026-01-27 22:10:18.1768049 +0000 UTC m=+1370.592826599" observedRunningTime="2026-01-27 22:10:19.030007662 +0000 UTC m=+1371.446029371" watchObservedRunningTime="2026-01-27 22:10:19.0362649 +0000 UTC m=+1371.452286599" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.037677 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4414c4e3-3baa-4339-95de-5dc17a42210b","Type":"ContainerStarted","Data":"e9041aa0adedea8c6f825f569298768e7816db515ada829c7de17f0e951bfa97"} Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.045955 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xfps2-config-h7b47" event={"ID":"77919224-bce7-406a-9c69-18baf881e6c8","Type":"ContainerDied","Data":"402ca1dbd90e56843fb802f012e20d9b5af0a2bd82dfeda559b64f352607f414"} Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.046006 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="402ca1dbd90e56843fb802f012e20d9b5af0a2bd82dfeda559b64f352607f414" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.046073 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xfps2-config-h7b47" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.052329 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rwh8m" event={"ID":"5e22e2e1-6b5c-4535-9e81-29559a44cd40","Type":"ContainerStarted","Data":"37660c0d8c80dda9bc70f19659f95645026b85aeba3541cd465b60b07560be08"} Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.052380 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rwh8m" event={"ID":"5e22e2e1-6b5c-4535-9e81-29559a44cd40","Type":"ContainerStarted","Data":"13330cd1c865bf8982f0da938a0f7ade34274a50c5b586a1d65fd1a708f03939"} Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.082657 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.002146345 podStartE2EDuration="10.08263004s" podCreationTimestamp="2026-01-27 22:10:09 +0000 UTC" firstStartedPulling="2026-01-27 22:10:10.074798954 +0000 UTC m=+1362.490820643" lastFinishedPulling="2026-01-27 22:10:18.155282619 +0000 UTC m=+1370.571304338" observedRunningTime="2026-01-27 22:10:19.060807612 +0000 UTC m=+1371.476829321" watchObservedRunningTime="2026-01-27 22:10:19.08263004 +0000 UTC m=+1371.498651739" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.088286 4803 scope.go:117] "RemoveContainer" containerID="3ab14d866946a49b52097f3eab160d5549a9b1215efbfc0611543f594392f497" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.146222 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-rwh8m" podStartSLOduration=7.146201432 podStartE2EDuration="7.146201432s" podCreationTimestamp="2026-01-27 22:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:19.08375201 +0000 UTC m=+1371.499773709" watchObservedRunningTime="2026-01-27 22:10:19.146201432 +0000 UTC m=+1371.562223131" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.200826 4803 scope.go:117] "RemoveContainer" containerID="1d16d2ec6950ee680547e996772b5f061effba959c9b6af212e7a394bfc5dc9f" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.213637 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.240129 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.290042 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 22:10:19 crc kubenswrapper[4803]: E0127 22:10:19.290825 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e4fbb3-3248-49d9-8302-cf3f0bc8ef00" containerName="swift-ring-rebalance" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.290837 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e4fbb3-3248-49d9-8302-cf3f0bc8ef00" containerName="swift-ring-rebalance" Jan 27 22:10:19 crc kubenswrapper[4803]: E0127 22:10:19.290878 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="config-reloader" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.290885 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="config-reloader" Jan 27 
22:10:19 crc kubenswrapper[4803]: E0127 22:10:19.290896 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="thanos-sidecar" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.290903 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="thanos-sidecar" Jan 27 22:10:19 crc kubenswrapper[4803]: E0127 22:10:19.290911 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="init-config-reloader" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.290918 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="init-config-reloader" Jan 27 22:10:19 crc kubenswrapper[4803]: E0127 22:10:19.290933 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77919224-bce7-406a-9c69-18baf881e6c8" containerName="ovn-config" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.290939 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="77919224-bce7-406a-9c69-18baf881e6c8" containerName="ovn-config" Jan 27 22:10:19 crc kubenswrapper[4803]: E0127 22:10:19.290947 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="prometheus" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.290952 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="prometheus" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.291142 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="prometheus" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.291155 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="77919224-bce7-406a-9c69-18baf881e6c8" containerName="ovn-config" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.291177 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="thanos-sidecar" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.291187 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" containerName="config-reloader" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.291199 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e4fbb3-3248-49d9-8302-cf3f0bc8ef00" containerName="swift-ring-rebalance" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.292915 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.300532 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.300549 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.300627 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.300901 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.300967 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.300627 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.300673 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nxgns" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.305544 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.308060 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.308344 4803 scope.go:117] "RemoveContainer" containerID="64a4d8d38614f6fe156a56ec2cc98eb8d14dedc403fe50c59b65d5eb8ed368ae" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.322646 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.496486 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.496592 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.496623 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.496647 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.496702 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.496736 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.496761 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-config\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.507954 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.508199 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2r8l\" (UniqueName: \"kubernetes.io/projected/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-kube-api-access-v2r8l\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.508341 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.508388 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.508414 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.508435 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.509528 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xfps2-config-h7b47"] Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.519796 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xfps2-config-h7b47"] Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.536002 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xfps2-config-lqpcc"] Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.537790 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.539992 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.549363 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xfps2-config-lqpcc"] Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610136 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610187 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-additional-scripts\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610237 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2r8l\" (UniqueName: \"kubernetes.io/projected/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-kube-api-access-v2r8l\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610274 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-scripts\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610294 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610316 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610338 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610360 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610386 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-log-ovn\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610404 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610430 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610502 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610525 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-secret-combined-ca-bundle\") pod 
\"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610542 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610581 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610616 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610633 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-config\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610654 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrr2f\" (UniqueName: \"kubernetes.io/projected/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-kube-api-access-jrr2f\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.610688 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run-ovn\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.612307 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.612478 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.612612 4803 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.617497 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.617775 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.617826 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/08c67f674327cf14c0159546d65f5dd7b019eaac71000ad86f5fa5ecad0cfcfa/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.618018 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.618033 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.618529 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.619065 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.619938 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.624970 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-config\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.631258 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.631584 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2r8l\" (UniqueName: \"kubernetes.io/projected/f9122f89-a56c-47d7-ad05-9aab6acdcc2f-kube-api-access-v2r8l\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.661160 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e901c9a0-a477-4dd9-9007-c2ab1043a899\") pod \"prometheus-metric-storage-0\" (UID: \"f9122f89-a56c-47d7-ad05-9aab6acdcc2f\") " pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.713663 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.714210 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrr2f\" (UniqueName: \"kubernetes.io/projected/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-kube-api-access-jrr2f\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.714290 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run-ovn\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.714326 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-additional-scripts\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.714386 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-scripts\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.714428 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-log-ovn\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.714447 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.714531 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run-ovn\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.714729 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-log-ovn\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.714771 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.715354 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-additional-scripts\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.716403 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-scripts\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:19 crc kubenswrapper[4803]: I0127 22:10:19.730658 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrr2f\" (UniqueName: \"kubernetes.io/projected/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-kube-api-access-jrr2f\") pod \"ovn-controller-xfps2-config-lqpcc\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:20 crc kubenswrapper[4803]: I0127 22:10:20.023281 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:20 crc kubenswrapper[4803]: I0127 22:10:20.066623 4803 generic.go:334] "Generic (PLEG): container finished" podID="5e22e2e1-6b5c-4535-9e81-29559a44cd40" containerID="37660c0d8c80dda9bc70f19659f95645026b85aeba3541cd465b60b07560be08" exitCode=0 Jan 27 22:10:20 crc kubenswrapper[4803]: I0127 22:10:20.066749 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rwh8m" event={"ID":"5e22e2e1-6b5c-4535-9e81-29559a44cd40","Type":"ContainerDied","Data":"37660c0d8c80dda9bc70f19659f95645026b85aeba3541cd465b60b07560be08"} Jan 27 22:10:20 crc kubenswrapper[4803]: I0127 22:10:20.188826 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 22:10:20 crc kubenswrapper[4803]: W0127 22:10:20.193025 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9122f89_a56c_47d7_ad05_9aab6acdcc2f.slice/crio-e42b399f5d1028e6ad61d35ed88172b41e674880290077aadceaa8a0f6af9879 WatchSource:0}: Error finding container e42b399f5d1028e6ad61d35ed88172b41e674880290077aadceaa8a0f6af9879: Status 404 returned error can't find the container with id e42b399f5d1028e6ad61d35ed88172b41e674880290077aadceaa8a0f6af9879 Jan 27 22:10:20 crc kubenswrapper[4803]: I0127 22:10:20.328071 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006465d9-12d6-4d2e-a02e-8a2669bdcbef" path="/var/lib/kubelet/pods/006465d9-12d6-4d2e-a02e-8a2669bdcbef/volumes" Jan 27 22:10:20 crc kubenswrapper[4803]: I0127 22:10:20.329260 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77919224-bce7-406a-9c69-18baf881e6c8" path="/var/lib/kubelet/pods/77919224-bce7-406a-9c69-18baf881e6c8/volumes" Jan 27 22:10:20 crc kubenswrapper[4803]: W0127 22:10:20.534317 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9ea4e85_1364_4a5a_962c_ef0ede1c831c.slice/crio-faf9e798796331aca9b5728d54e48f29ec25f741bd51cf1612d7123f39710771 WatchSource:0}: Error finding container faf9e798796331aca9b5728d54e48f29ec25f741bd51cf1612d7123f39710771: Status 404 returned error can't find the container with id faf9e798796331aca9b5728d54e48f29ec25f741bd51cf1612d7123f39710771 Jan 27 22:10:20 crc kubenswrapper[4803]: I0127 22:10:20.541185 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xfps2-config-lqpcc"] Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.081166 4803 generic.go:334] "Generic (PLEG): container finished" podID="a9ea4e85-1364-4a5a-962c-ef0ede1c831c" containerID="82bfc22bb9db9ea4a1413029c6f7c8b61c687318ccf27836bc6fc4b414138873" exitCode=0 Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.081212 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xfps2-config-lqpcc" event={"ID":"a9ea4e85-1364-4a5a-962c-ef0ede1c831c","Type":"ContainerDied","Data":"82bfc22bb9db9ea4a1413029c6f7c8b61c687318ccf27836bc6fc4b414138873"} Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.081264 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xfps2-config-lqpcc" event={"ID":"a9ea4e85-1364-4a5a-962c-ef0ede1c831c","Type":"ContainerStarted","Data":"faf9e798796331aca9b5728d54e48f29ec25f741bd51cf1612d7123f39710771"} Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.082828 4803 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f9122f89-a56c-47d7-ad05-9aab6acdcc2f","Type":"ContainerStarted","Data":"e42b399f5d1028e6ad61d35ed88172b41e674880290077aadceaa8a0f6af9879"} Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.464614 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.478167 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72f06f5c-7c0f-4969-89a2-b16210f935c4-etc-swift\") pod \"swift-storage-0\" (UID: \"72f06f5c-7c0f-4969-89a2-b16210f935c4\") " pod="openstack/swift-storage-0" Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.556690 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rwh8m" Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.579538 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.668028 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24g8t\" (UniqueName: \"kubernetes.io/projected/5e22e2e1-6b5c-4535-9e81-29559a44cd40-kube-api-access-24g8t\") pod \"5e22e2e1-6b5c-4535-9e81-29559a44cd40\" (UID: \"5e22e2e1-6b5c-4535-9e81-29559a44cd40\") " Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.668324 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22e2e1-6b5c-4535-9e81-29559a44cd40-operator-scripts\") pod \"5e22e2e1-6b5c-4535-9e81-29559a44cd40\" (UID: \"5e22e2e1-6b5c-4535-9e81-29559a44cd40\") " Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.669129 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e22e2e1-6b5c-4535-9e81-29559a44cd40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e22e2e1-6b5c-4535-9e81-29559a44cd40" (UID: "5e22e2e1-6b5c-4535-9e81-29559a44cd40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.674450 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e22e2e1-6b5c-4535-9e81-29559a44cd40-kube-api-access-24g8t" (OuterVolumeSpecName: "kube-api-access-24g8t") pod "5e22e2e1-6b5c-4535-9e81-29559a44cd40" (UID: "5e22e2e1-6b5c-4535-9e81-29559a44cd40"). InnerVolumeSpecName "kube-api-access-24g8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.775590 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24g8t\" (UniqueName: \"kubernetes.io/projected/5e22e2e1-6b5c-4535-9e81-29559a44cd40-kube-api-access-24g8t\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:21 crc kubenswrapper[4803]: I0127 22:10:21.775622 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22e2e1-6b5c-4535-9e81-29559a44cd40-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.092126 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rwh8m" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.092115 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rwh8m" event={"ID":"5e22e2e1-6b5c-4535-9e81-29559a44cd40","Type":"ContainerDied","Data":"13330cd1c865bf8982f0da938a0f7ade34274a50c5b586a1d65fd1a708f03939"} Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.092595 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13330cd1c865bf8982f0da938a0f7ade34274a50c5b586a1d65fd1a708f03939" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.162679 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 22:10:22 crc kubenswrapper[4803]: W0127 22:10:22.192397 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72f06f5c_7c0f_4969_89a2_b16210f935c4.slice/crio-94120c6cfeed43a0c699dd51ad6dd379bf671e1cdbb6862c0688c697b0ab4a30 WatchSource:0}: Error finding container 94120c6cfeed43a0c699dd51ad6dd379bf671e1cdbb6862c0688c697b0ab4a30: Status 404 returned error can't find the container with id 94120c6cfeed43a0c699dd51ad6dd379bf671e1cdbb6862c0688c697b0ab4a30 Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.597986 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.690398 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrr2f\" (UniqueName: \"kubernetes.io/projected/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-kube-api-access-jrr2f\") pod \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.690474 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-additional-scripts\") pod \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.690613 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run\") pod \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.690680 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run-ovn\") pod \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.690768 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-log-ovn\") pod \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.690803 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-scripts\") pod \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\" (UID: \"a9ea4e85-1364-4a5a-962c-ef0ede1c831c\") " Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.690959 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run" (OuterVolumeSpecName: "var-run") pod "a9ea4e85-1364-4a5a-962c-ef0ede1c831c" (UID: "a9ea4e85-1364-4a5a-962c-ef0ede1c831c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.691635 4803 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.691757 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a9ea4e85-1364-4a5a-962c-ef0ede1c831c" (UID: "a9ea4e85-1364-4a5a-962c-ef0ede1c831c"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.691983 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a9ea4e85-1364-4a5a-962c-ef0ede1c831c" (UID: "a9ea4e85-1364-4a5a-962c-ef0ede1c831c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.692050 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a9ea4e85-1364-4a5a-962c-ef0ede1c831c" (UID: "a9ea4e85-1364-4a5a-962c-ef0ede1c831c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.692254 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-scripts" (OuterVolumeSpecName: "scripts") pod "a9ea4e85-1364-4a5a-962c-ef0ede1c831c" (UID: "a9ea4e85-1364-4a5a-962c-ef0ede1c831c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.793229 4803 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.793265 4803 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.793274 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.793285 4803 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.877032 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.884908 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-kube-api-access-jrr2f" (OuterVolumeSpecName: "kube-api-access-jrr2f") pod "a9ea4e85-1364-4a5a-962c-ef0ede1c831c" (UID: "a9ea4e85-1364-4a5a-962c-ef0ede1c831c"). InnerVolumeSpecName "kube-api-access-jrr2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:22 crc kubenswrapper[4803]: I0127 22:10:22.896370 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrr2f\" (UniqueName: \"kubernetes.io/projected/a9ea4e85-1364-4a5a-962c-ef0ede1c831c-kube-api-access-jrr2f\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.107424 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xfps2-config-lqpcc" event={"ID":"a9ea4e85-1364-4a5a-962c-ef0ede1c831c","Type":"ContainerDied","Data":"faf9e798796331aca9b5728d54e48f29ec25f741bd51cf1612d7123f39710771"} Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.107753 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faf9e798796331aca9b5728d54e48f29ec25f741bd51cf1612d7123f39710771" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.107482 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xfps2-config-lqpcc" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.115672 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"94120c6cfeed43a0c699dd51ad6dd379bf671e1cdbb6862c0688c697b0ab4a30"} Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.212420 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="993ad889-77c3-480e-8b5b-985766d488be" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.242061 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.672681 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xfps2-config-lqpcc"] Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.675042 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.687092 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xfps2-config-lqpcc"] Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.735921 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xfps2-config-pxff9"] Jan 27 22:10:23 crc kubenswrapper[4803]: E0127 22:10:23.736693 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e22e2e1-6b5c-4535-9e81-29559a44cd40" containerName="mariadb-account-create-update" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.736786 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e22e2e1-6b5c-4535-9e81-29559a44cd40" containerName="mariadb-account-create-update" Jan 27 22:10:23 crc kubenswrapper[4803]: E0127 22:10:23.736908 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ea4e85-1364-4a5a-962c-ef0ede1c831c" containerName="ovn-config" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.736993 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ea4e85-1364-4a5a-962c-ef0ede1c831c" containerName="ovn-config" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.737359 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9ea4e85-1364-4a5a-962c-ef0ede1c831c" containerName="ovn-config" Jan 27 
22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.737460 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e22e2e1-6b5c-4535-9e81-29559a44cd40" containerName="mariadb-account-create-update" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.738454 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.742472 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.746346 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xfps2-config-pxff9"] Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.817754 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run-ovn\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.817985 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-scripts\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.818025 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.818053 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26hd6\" (UniqueName: \"kubernetes.io/projected/1db587c2-0a9b-4f48-b1e2-78a81aa24725-kube-api-access-26hd6\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.818109 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-log-ovn\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.818128 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-additional-scripts\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.920985 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-scripts\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: 
\"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.921086 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.921140 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26hd6\" (UniqueName: \"kubernetes.io/projected/1db587c2-0a9b-4f48-b1e2-78a81aa24725-kube-api-access-26hd6\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.921238 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-log-ovn\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.921271 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-additional-scripts\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.921507 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run-ovn\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.922038 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run-ovn\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.922425 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.922494 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-log-ovn\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.923561 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-additional-scripts\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: 
\"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.924612 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-scripts\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:23 crc kubenswrapper[4803]: I0127 22:10:23.950179 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26hd6\" (UniqueName: \"kubernetes.io/projected/1db587c2-0a9b-4f48-b1e2-78a81aa24725-kube-api-access-26hd6\") pod \"ovn-controller-xfps2-config-pxff9\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:24 crc kubenswrapper[4803]: I0127 22:10:24.130703 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"a90e60e96bf9898e1122b04353af4b2514a248131850eea5d553fdc196ac2acb"} Jan 27 22:10:24 crc kubenswrapper[4803]: I0127 22:10:24.130754 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"d7e2de6d9303c746622051eeb40a855c3c268e35af637c60cdbc5707b8a508fa"} Jan 27 22:10:24 crc kubenswrapper[4803]: I0127 22:10:24.133654 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f9122f89-a56c-47d7-ad05-9aab6acdcc2f","Type":"ContainerStarted","Data":"1b02fffc976b848e1c2cc8819cfb712e0638b843ba5684cb29eba57dad31206c"} Jan 27 22:10:24 crc kubenswrapper[4803]: I0127 22:10:24.220550 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:24 crc kubenswrapper[4803]: I0127 22:10:24.338047 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9ea4e85-1364-4a5a-962c-ef0ede1c831c" path="/var/lib/kubelet/pods/a9ea4e85-1364-4a5a-962c-ef0ede1c831c/volumes" Jan 27 22:10:24 crc kubenswrapper[4803]: I0127 22:10:24.606088 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rwh8m"] Jan 27 22:10:24 crc kubenswrapper[4803]: I0127 22:10:24.616243 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rwh8m"] Jan 27 22:10:24 crc kubenswrapper[4803]: I0127 22:10:24.756641 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xfps2-config-pxff9"] Jan 27 22:10:25 crc kubenswrapper[4803]: I0127 22:10:25.144992 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"64c9d20844244c1c1503b1c8660d720fd31b6e9f4d27d3ca8cfa26c2b599e307"} Jan 27 22:10:25 crc kubenswrapper[4803]: I0127 22:10:25.145350 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"4d2c7782a0dd87b00bdc41b08855eead400ea6ab8cbf732af88e91e689e58c17"} Jan 27 22:10:25 crc kubenswrapper[4803]: I0127 22:10:25.146434 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xfps2-config-pxff9" event={"ID":"1db587c2-0a9b-4f48-b1e2-78a81aa24725","Type":"ContainerStarted","Data":"7dbaa563d0a7019e5b80f922c0893e8cddb470f7154dd9146728f8a5b5c06a9e"} Jan 27 22:10:25 crc kubenswrapper[4803]: I0127 22:10:25.146498 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xfps2-config-pxff9" event={"ID":"1db587c2-0a9b-4f48-b1e2-78a81aa24725","Type":"ContainerStarted","Data":"d83cdc2798c679d2a1b2ebca629147bbc2f123fc146e30fc0a9546fa2fb78c67"} Jan 27 22:10:25 crc kubenswrapper[4803]: I0127 22:10:25.167223 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xfps2-config-pxff9" podStartSLOduration=2.167207322 podStartE2EDuration="2.167207322s" podCreationTimestamp="2026-01-27 22:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:25.164405426 +0000 UTC m=+1377.580427135" watchObservedRunningTime="2026-01-27 22:10:25.167207322 +0000 UTC m=+1377.583229021" Jan 27 22:10:26 crc kubenswrapper[4803]: I0127 22:10:26.161653 4803 generic.go:334] "Generic (PLEG): container finished" podID="1db587c2-0a9b-4f48-b1e2-78a81aa24725" containerID="7dbaa563d0a7019e5b80f922c0893e8cddb470f7154dd9146728f8a5b5c06a9e" exitCode=0 Jan 27 22:10:26 crc kubenswrapper[4803]: I0127 22:10:26.162391 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xfps2-config-pxff9" event={"ID":"1db587c2-0a9b-4f48-b1e2-78a81aa24725","Type":"ContainerDied","Data":"7dbaa563d0a7019e5b80f922c0893e8cddb470f7154dd9146728f8a5b5c06a9e"} Jan 27 22:10:26 crc kubenswrapper[4803]: I0127 22:10:26.194372 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"77e4ee268d07da3c5c05caa93d98d02ef14258a7c4ae5eb5b4d4f73335b41aeb"} Jan 27 22:10:26 crc 
kubenswrapper[4803]: I0127 22:10:26.194423 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"0a8297facea1f4fa96fccef0cec3dab56532546379ed5ffdfb893c72f2a55c31"} Jan 27 22:10:26 crc kubenswrapper[4803]: I0127 22:10:26.194439 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"377cba1cf0844ad6a63f720ff7cba9dfb3263f72f6119dc09e62f953afae7c6d"} Jan 27 22:10:26 crc kubenswrapper[4803]: I0127 22:10:26.319947 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e22e2e1-6b5c-4535-9e81-29559a44cd40" path="/var/lib/kubelet/pods/5e22e2e1-6b5c-4535-9e81-29559a44cd40/volumes" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.206050 4803 generic.go:334] "Generic (PLEG): container finished" podID="a7065dfd-1cab-471d-9aa5-60cee3714a4e" containerID="014982ac8c7718c3705ede520e17600202487e7bf067affd5608ad34427786aa" exitCode=0 Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.206119 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qptdp" event={"ID":"a7065dfd-1cab-471d-9aa5-60cee3714a4e","Type":"ContainerDied","Data":"014982ac8c7718c3705ede520e17600202487e7bf067affd5608ad34427786aa"} Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.212201 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"eacfab427194df342d7c11f8d9494a8f7cafecb705eb849c4000eeba2a60585f"} Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.518894 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.605966 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-additional-scripts\") pod \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.606088 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-log-ovn\") pod \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.606168 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-scripts\") pod \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.606196 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26hd6\" (UniqueName: \"kubernetes.io/projected/1db587c2-0a9b-4f48-b1e2-78a81aa24725-kube-api-access-26hd6\") pod \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.606237 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run-ovn\") pod \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.606288 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run\") pod \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\" (UID: \"1db587c2-0a9b-4f48-b1e2-78a81aa24725\") " Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.606788 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run" (OuterVolumeSpecName: "var-run") pod "1db587c2-0a9b-4f48-b1e2-78a81aa24725" (UID: "1db587c2-0a9b-4f48-b1e2-78a81aa24725"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.606974 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1db587c2-0a9b-4f48-b1e2-78a81aa24725" (UID: "1db587c2-0a9b-4f48-b1e2-78a81aa24725"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.606990 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1db587c2-0a9b-4f48-b1e2-78a81aa24725" (UID: "1db587c2-0a9b-4f48-b1e2-78a81aa24725"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.607459 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1db587c2-0a9b-4f48-b1e2-78a81aa24725" (UID: "1db587c2-0a9b-4f48-b1e2-78a81aa24725"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.607865 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-scripts" (OuterVolumeSpecName: "scripts") pod "1db587c2-0a9b-4f48-b1e2-78a81aa24725" (UID: "1db587c2-0a9b-4f48-b1e2-78a81aa24725"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.611088 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db587c2-0a9b-4f48-b1e2-78a81aa24725-kube-api-access-26hd6" (OuterVolumeSpecName: "kube-api-access-26hd6") pod "1db587c2-0a9b-4f48-b1e2-78a81aa24725" (UID: "1db587c2-0a9b-4f48-b1e2-78a81aa24725"). InnerVolumeSpecName "kube-api-access-26hd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.709098 4803 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.709359 4803 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.709452 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1db587c2-0a9b-4f48-b1e2-78a81aa24725-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.709526 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26hd6\" (UniqueName: \"kubernetes.io/projected/1db587c2-0a9b-4f48-b1e2-78a81aa24725-kube-api-access-26hd6\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.709606 4803 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.709688 4803 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1db587c2-0a9b-4f48-b1e2-78a81aa24725-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.833754 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xfps2-config-pxff9"] Jan 27 22:10:27 crc kubenswrapper[4803]: I0127 22:10:27.841892 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xfps2-config-pxff9"] Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.221808 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d83cdc2798c679d2a1b2ebca629147bbc2f123fc146e30fc0a9546fa2fb78c67" Jan 27 
22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.222097 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xfps2-config-pxff9" Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.230663 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"e4d1de95eed87922a554baf4785e882ca7c2763ec8eac6aa0279a975087982ef"} Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.230711 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"acbbc35ed736e7269218e348f7349a7c0d67ac3461919dccb97996b063b6e1ba"} Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.230725 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"51594356ebbc6320c979da9900d3f768b7a0401039c82c4ec89ff9c3dec6fdc4"} Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.230736 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"55592d4f77ec4fdf048153716b991f5ca1411cfaa72361c9748e2dbdde8ba899"} Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.337918 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db587c2-0a9b-4f48-b1e2-78a81aa24725" path="/var/lib/kubelet/pods/1db587c2-0a9b-4f48-b1e2-78a81aa24725/volumes" Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.746862 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qptdp" Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.836511 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-db-sync-config-data\") pod \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.837074 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w8qk\" (UniqueName: \"kubernetes.io/projected/a7065dfd-1cab-471d-9aa5-60cee3714a4e-kube-api-access-6w8qk\") pod \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.837106 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-config-data\") pod \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.837202 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-combined-ca-bundle\") pod \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\" (UID: \"a7065dfd-1cab-471d-9aa5-60cee3714a4e\") " Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.841155 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod 
"a7065dfd-1cab-471d-9aa5-60cee3714a4e" (UID: "a7065dfd-1cab-471d-9aa5-60cee3714a4e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.843940 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7065dfd-1cab-471d-9aa5-60cee3714a4e-kube-api-access-6w8qk" (OuterVolumeSpecName: "kube-api-access-6w8qk") pod "a7065dfd-1cab-471d-9aa5-60cee3714a4e" (UID: "a7065dfd-1cab-471d-9aa5-60cee3714a4e"). InnerVolumeSpecName "kube-api-access-6w8qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.865380 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7065dfd-1cab-471d-9aa5-60cee3714a4e" (UID: "a7065dfd-1cab-471d-9aa5-60cee3714a4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.889619 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-config-data" (OuterVolumeSpecName: "config-data") pod "a7065dfd-1cab-471d-9aa5-60cee3714a4e" (UID: "a7065dfd-1cab-471d-9aa5-60cee3714a4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.939678 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.939741 4803 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.939751 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w8qk\" (UniqueName: \"kubernetes.io/projected/a7065dfd-1cab-471d-9aa5-60cee3714a4e-kube-api-access-6w8qk\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:28 crc kubenswrapper[4803]: I0127 22:10:28.939762 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7065dfd-1cab-471d-9aa5-60cee3714a4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.257757 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"d47e02c34358b8ab0fa6914d962a914fe7bc54e41e8d2c90d8e1de9e0c53875f"} Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.258462 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"d72f88d3b5e5a2e6b6c4979fbb11c17be04a299f09517da693086d559ded3f7d"} Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.258546 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72f06f5c-7c0f-4969-89a2-b16210f935c4","Type":"ContainerStarted","Data":"6849a9cef203c7451b5e8ac459b4b072cdc4663578542f89647f607debb47238"} Jan 27 22:10:29 crc 
Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.262952 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qptdp" event={"ID":"a7065dfd-1cab-471d-9aa5-60cee3714a4e","Type":"ContainerDied","Data":"66cda4f1f191be615f0ba89ef4e1bdf977d2d3fb0c5e1d5b85c112a9df27e538"}
Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.262982 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66cda4f1f191be615f0ba89ef4e1bdf977d2d3fb0c5e1d5b85c112a9df27e538"
Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.263010 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qptdp"
Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.604751 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.630279334 podStartE2EDuration="41.60472511s" podCreationTimestamp="2026-01-27 22:09:48 +0000 UTC" firstStartedPulling="2026-01-27 22:10:22.194990059 +0000 UTC m=+1374.611011758" lastFinishedPulling="2026-01-27 22:10:27.169435835 +0000 UTC m=+1379.585457534" observedRunningTime="2026-01-27 22:10:29.313761639 +0000 UTC m=+1381.729783358" watchObservedRunningTime="2026-01-27 22:10:29.60472511 +0000 UTC m=+1382.020746819"
Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.628541 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-grgts"]
Jan 27 22:10:29 crc kubenswrapper[4803]: E0127 22:10:29.629136 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db587c2-0a9b-4f48-b1e2-78a81aa24725" containerName="ovn-config"
Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.629154 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db587c2-0a9b-4f48-b1e2-78a81aa24725" containerName="ovn-config"
Jan 27 22:10:29 crc kubenswrapper[4803]: E0127 22:10:29.629193 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7065dfd-1cab-471d-9aa5-60cee3714a4e" containerName="glance-db-sync"
Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.629202 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7065dfd-1cab-471d-9aa5-60cee3714a4e" containerName="glance-db-sync"
Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.629420 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db587c2-0a9b-4f48-b1e2-78a81aa24725" containerName="ovn-config"
Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.629448 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7065dfd-1cab-471d-9aa5-60cee3714a4e" containerName="glance-db-sync"
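The "Observed pod startup duration" entry above for swift-storage-0 makes the tracker's arithmetic checkable: podStartSLOduration appears to be the end-to-end startup latency minus the image-pull window, which is consistent with every such entry in this capture. A worked example using the exact timestamps logged above (the layout string matches Go's default time formatting, which these fields use):

```go
// slo_math.go - worked example with the swift-storage-0 values from the log.
package main

import (
	"fmt"
	"time"
)

// Layout matching the log's "2026-01-27 22:10:29.60472511 +0000 UTC" fields.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-27 22:09:48 +0000 UTC")
	firstPull := mustParse("2026-01-27 22:10:22.194990059 +0000 UTC")
	lastPull := mustParse("2026-01-27 22:10:27.169435835 +0000 UTC")
	running := mustParse("2026-01-27 22:10:29.60472511 +0000 UTC")

	e2e := running.Sub(created)          // 41.60472511s  = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 36.630279334s = podStartSLOduration
	fmt.Println(e2e, slo)
}
```

41.60472511s end to end, minus the 4.974445776s between firstStartedPulling and lastFinishedPulling, gives 36.630279334s, exactly the logged SLO value; pods that pulled no images (the zero "0001-01-01" pull timestamps elsewhere in this capture) report identical SLO and E2E durations.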
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.647470 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-grgts"] Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.660199 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-config\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.660270 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-dns-svc\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.660302 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.660370 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t622g\" (UniqueName: \"kubernetes.io/projected/b56c523f-bb72-4c1d-af0d-83d981023082-kube-api-access-t622g\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.660471 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.727729 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-297r9"] Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.729540 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-297r9" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.731591 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.746257 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-297r9"] Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.765502 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.765609 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76260834-5c9e-485d-bbe9-71f319b5a9a6-operator-scripts\") pod \"root-account-create-update-297r9\" (UID: \"76260834-5c9e-485d-bbe9-71f319b5a9a6\") " pod="openstack/root-account-create-update-297r9" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.765722 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-config\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.766970 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-dns-svc\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.770574 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.770802 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t622g\" (UniqueName: \"kubernetes.io/projected/b56c523f-bb72-4c1d-af0d-83d981023082-kube-api-access-t622g\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.770919 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz9rw\" (UniqueName: \"kubernetes.io/projected/76260834-5c9e-485d-bbe9-71f319b5a9a6-kube-api-access-xz9rw\") pod \"root-account-create-update-297r9\" (UID: \"76260834-5c9e-485d-bbe9-71f319b5a9a6\") " pod="openstack/root-account-create-update-297r9" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.772233 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-grgts"] Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.772827 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-dns-svc\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: E0127 22:10:29.773128 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-t622g ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-74dc88fc-grgts" podUID="b56c523f-bb72-4c1d-af0d-83d981023082" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.773294 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.773370 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-config\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.779334 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.791997 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hgl6x"] Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.793565 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.795741 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.809570 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hgl6x"] Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.809729 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t622g\" (UniqueName: \"kubernetes.io/projected/b56c523f-bb72-4c1d-af0d-83d981023082-kube-api-access-t622g\") pod \"dnsmasq-dns-74dc88fc-grgts\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.873372 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk942\" (UniqueName: \"kubernetes.io/projected/4f372552-a6b6-4446-ae72-d1a8370b514e-kube-api-access-sk942\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.873985 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.874132 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz9rw\" (UniqueName: \"kubernetes.io/projected/76260834-5c9e-485d-bbe9-71f319b5a9a6-kube-api-access-xz9rw\") pod \"root-account-create-update-297r9\" (UID: \"76260834-5c9e-485d-bbe9-71f319b5a9a6\") " pod="openstack/root-account-create-update-297r9" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.874251 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76260834-5c9e-485d-bbe9-71f319b5a9a6-operator-scripts\") pod \"root-account-create-update-297r9\" (UID: \"76260834-5c9e-485d-bbe9-71f319b5a9a6\") " pod="openstack/root-account-create-update-297r9" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.874330 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-config\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.874450 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.874540 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: 
\"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.874643 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.875167 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76260834-5c9e-485d-bbe9-71f319b5a9a6-operator-scripts\") pod \"root-account-create-update-297r9\" (UID: \"76260834-5c9e-485d-bbe9-71f319b5a9a6\") " pod="openstack/root-account-create-update-297r9" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.894162 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz9rw\" (UniqueName: \"kubernetes.io/projected/76260834-5c9e-485d-bbe9-71f319b5a9a6-kube-api-access-xz9rw\") pod \"root-account-create-update-297r9\" (UID: \"76260834-5c9e-485d-bbe9-71f319b5a9a6\") " pod="openstack/root-account-create-update-297r9" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.976021 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-config\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.976453 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.976493 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.976536 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.976562 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk942\" (UniqueName: \"kubernetes.io/projected/4f372552-a6b6-4446-ae72-d1a8370b514e-kube-api-access-sk942\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.976631 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: 
\"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.977326 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.977543 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.977586 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.977750 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-config\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.977792 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:29 crc kubenswrapper[4803]: I0127 22:10:29.994484 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk942\" (UniqueName: \"kubernetes.io/projected/4f372552-a6b6-4446-ae72-d1a8370b514e-kube-api-access-sk942\") pod \"dnsmasq-dns-5f59b8f679-hgl6x\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.072790 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-297r9" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.156269 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.289640 4803 generic.go:334] "Generic (PLEG): container finished" podID="f9122f89-a56c-47d7-ad05-9aab6acdcc2f" containerID="1b02fffc976b848e1c2cc8819cfb712e0638b843ba5684cb29eba57dad31206c" exitCode=0 Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.289986 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f9122f89-a56c-47d7-ad05-9aab6acdcc2f","Type":"ContainerDied","Data":"1b02fffc976b848e1c2cc8819cfb712e0638b843ba5684cb29eba57dad31206c"} Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.291925 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.311788 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.385857 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t622g\" (UniqueName: \"kubernetes.io/projected/b56c523f-bb72-4c1d-af0d-83d981023082-kube-api-access-t622g\") pod \"b56c523f-bb72-4c1d-af0d-83d981023082\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.385938 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-sb\") pod \"b56c523f-bb72-4c1d-af0d-83d981023082\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.385984 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-dns-svc\") pod \"b56c523f-bb72-4c1d-af0d-83d981023082\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.386145 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-nb\") pod \"b56c523f-bb72-4c1d-af0d-83d981023082\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.386164 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-config\") pod \"b56c523f-bb72-4c1d-af0d-83d981023082\" (UID: \"b56c523f-bb72-4c1d-af0d-83d981023082\") " Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.387954 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b56c523f-bb72-4c1d-af0d-83d981023082" (UID: "b56c523f-bb72-4c1d-af0d-83d981023082"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.389707 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-config" (OuterVolumeSpecName: "config") pod "b56c523f-bb72-4c1d-af0d-83d981023082" (UID: "b56c523f-bb72-4c1d-af0d-83d981023082"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.389728 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b56c523f-bb72-4c1d-af0d-83d981023082" (UID: "b56c523f-bb72-4c1d-af0d-83d981023082"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.389818 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b56c523f-bb72-4c1d-af0d-83d981023082" (UID: "b56c523f-bb72-4c1d-af0d-83d981023082"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.389961 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b56c523f-bb72-4c1d-af0d-83d981023082-kube-api-access-t622g" (OuterVolumeSpecName: "kube-api-access-t622g") pod "b56c523f-bb72-4c1d-af0d-83d981023082" (UID: "b56c523f-bb72-4c1d-af0d-83d981023082"). InnerVolumeSpecName "kube-api-access-t622g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.488781 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.488818 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.488829 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t622g\" (UniqueName: \"kubernetes.io/projected/b56c523f-bb72-4c1d-af0d-83d981023082-kube-api-access-t622g\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.488840 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.488939 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b56c523f-bb72-4c1d-af0d-83d981023082-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.553180 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-297r9"] Jan 27 22:10:30 crc kubenswrapper[4803]: W0127 22:10:30.568395 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76260834_5c9e_485d_bbe9_71f319b5a9a6.slice/crio-2b923b8af792742f160f3eca3f6ee1ffca717ba976c8409a76474b0ee8acf31b WatchSource:0}: Error finding container 2b923b8af792742f160f3eca3f6ee1ffca717ba976c8409a76474b0ee8acf31b: Status 404 returned error can't find the container with id 2b923b8af792742f160f3eca3f6ee1ffca717ba976c8409a76474b0ee8acf31b Jan 27 22:10:30 crc kubenswrapper[4803]: I0127 22:10:30.668767 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hgl6x"] Jan 27 22:10:30 crc kubenswrapper[4803]: W0127 22:10:30.669554 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f372552_a6b6_4446_ae72_d1a8370b514e.slice/crio-d3655a7fbb654c2ad537f048c489e3d26ee82135832af78f350968ebe882d805 WatchSource:0}: Error finding container 
d3655a7fbb654c2ad537f048c489e3d26ee82135832af78f350968ebe882d805: Status 404 returned error can't find the container with id d3655a7fbb654c2ad537f048c489e3d26ee82135832af78f350968ebe882d805
Jan 27 22:10:31 crc kubenswrapper[4803]: I0127 22:10:31.300081 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f9122f89-a56c-47d7-ad05-9aab6acdcc2f","Type":"ContainerStarted","Data":"7001c826e9bbc0c893ef60888bdfbdc72c27cf8f49474bf2dad01616fa0b035f"}
Jan 27 22:10:31 crc kubenswrapper[4803]: I0127 22:10:31.301942 4803 generic.go:334] "Generic (PLEG): container finished" podID="4f372552-a6b6-4446-ae72-d1a8370b514e" containerID="b2f70853bc245f6d4e2d7f66349934e5548fdba735cdc5e35572f5c864a5a7bc" exitCode=0
Jan 27 22:10:31 crc kubenswrapper[4803]: I0127 22:10:31.302011 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" event={"ID":"4f372552-a6b6-4446-ae72-d1a8370b514e","Type":"ContainerDied","Data":"b2f70853bc245f6d4e2d7f66349934e5548fdba735cdc5e35572f5c864a5a7bc"}
Jan 27 22:10:31 crc kubenswrapper[4803]: I0127 22:10:31.302032 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" event={"ID":"4f372552-a6b6-4446-ae72-d1a8370b514e","Type":"ContainerStarted","Data":"d3655a7fbb654c2ad537f048c489e3d26ee82135832af78f350968ebe882d805"}
Jan 27 22:10:31 crc kubenswrapper[4803]: I0127 22:10:31.306206 4803 generic.go:334] "Generic (PLEG): container finished" podID="76260834-5c9e-485d-bbe9-71f319b5a9a6" containerID="bc857b20d5a76fdfc21b04e16adf7eea6a40acc2c67047ecf75d9c7c06f953b6" exitCode=0
Jan 27 22:10:31 crc kubenswrapper[4803]: I0127 22:10:31.306343 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-297r9" event={"ID":"76260834-5c9e-485d-bbe9-71f319b5a9a6","Type":"ContainerDied","Data":"bc857b20d5a76fdfc21b04e16adf7eea6a40acc2c67047ecf75d9c7c06f953b6"}
Jan 27 22:10:31 crc kubenswrapper[4803]: I0127 22:10:31.306400 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-297r9" event={"ID":"76260834-5c9e-485d-bbe9-71f319b5a9a6","Type":"ContainerStarted","Data":"2b923b8af792742f160f3eca3f6ee1ffca717ba976c8409a76474b0ee8acf31b"}
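The manager.go:1169 warnings a few entries back ("Failed to process watch event ... Status 404") are cAdvisor trying to inspect a cgroup for a container CRI-O had already removed; for short-lived containers like these one-shot jobs, a single hit per container ID is expected noise. A small sketch (the one-hit heuristic and the file name are assumptions) that tallies the 404s per ID so repeats stand out:

```go
// watch404.go - sketch: counts cAdvisor "can't find the container with id"
// warnings per 64-hex-char container ID. One hit per ID is the usual benign
// create/delete race; repeated hits for the same ID merit a closer look.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var re404 = regexp.MustCompile(`can't find the container with id ([0-9a-f]{64})`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		if m := re404.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for id, n := range counts {
		fmt.Printf("%.12s %d occurrence(s)\n", id, n)
	}
}
```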
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-grgts" Jan 27 22:10:31 crc kubenswrapper[4803]: I0127 22:10:31.432952 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-grgts"] Jan 27 22:10:31 crc kubenswrapper[4803]: I0127 22:10:31.445466 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-grgts"] Jan 27 22:10:32 crc kubenswrapper[4803]: I0127 22:10:32.319178 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b56c523f-bb72-4c1d-af0d-83d981023082" path="/var/lib/kubelet/pods/b56c523f-bb72-4c1d-af0d-83d981023082/volumes" Jan 27 22:10:32 crc kubenswrapper[4803]: I0127 22:10:32.320101 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" event={"ID":"4f372552-a6b6-4446-ae72-d1a8370b514e","Type":"ContainerStarted","Data":"0bb285dc2f8321c8967cdbae618d0f4e33222f38b4dea29384bc4cfe8babc946"} Jan 27 22:10:32 crc kubenswrapper[4803]: I0127 22:10:32.343218 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" podStartSLOduration=3.343201213 podStartE2EDuration="3.343201213s" podCreationTimestamp="2026-01-27 22:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:32.338994861 +0000 UTC m=+1384.755016580" watchObservedRunningTime="2026-01-27 22:10:32.343201213 +0000 UTC m=+1384.759222912" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.072168 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-297r9" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.142772 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz9rw\" (UniqueName: \"kubernetes.io/projected/76260834-5c9e-485d-bbe9-71f319b5a9a6-kube-api-access-xz9rw\") pod \"76260834-5c9e-485d-bbe9-71f319b5a9a6\" (UID: \"76260834-5c9e-485d-bbe9-71f319b5a9a6\") " Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.143029 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76260834-5c9e-485d-bbe9-71f319b5a9a6-operator-scripts\") pod \"76260834-5c9e-485d-bbe9-71f319b5a9a6\" (UID: \"76260834-5c9e-485d-bbe9-71f319b5a9a6\") " Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.143702 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76260834-5c9e-485d-bbe9-71f319b5a9a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76260834-5c9e-485d-bbe9-71f319b5a9a6" (UID: "76260834-5c9e-485d-bbe9-71f319b5a9a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.162732 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76260834-5c9e-485d-bbe9-71f319b5a9a6-kube-api-access-xz9rw" (OuterVolumeSpecName: "kube-api-access-xz9rw") pod "76260834-5c9e-485d-bbe9-71f319b5a9a6" (UID: "76260834-5c9e-485d-bbe9-71f319b5a9a6"). InnerVolumeSpecName "kube-api-access-xz9rw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.215053 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.248064 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz9rw\" (UniqueName: \"kubernetes.io/projected/76260834-5c9e-485d-bbe9-71f319b5a9a6-kube-api-access-xz9rw\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.248357 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76260834-5c9e-485d-bbe9-71f319b5a9a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.330790 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-297r9" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.330784 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-297r9" event={"ID":"76260834-5c9e-485d-bbe9-71f319b5a9a6","Type":"ContainerDied","Data":"2b923b8af792742f160f3eca3f6ee1ffca717ba976c8409a76474b0ee8acf31b"} Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.330900 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b923b8af792742f160f3eca3f6ee1ffca717ba976c8409a76474b0ee8acf31b" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.332135 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.531008 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-bxbff"] Jan 27 22:10:33 crc kubenswrapper[4803]: E0127 22:10:33.531406 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76260834-5c9e-485d-bbe9-71f319b5a9a6" containerName="mariadb-account-create-update" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.531423 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="76260834-5c9e-485d-bbe9-71f319b5a9a6" containerName="mariadb-account-create-update" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.531647 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="76260834-5c9e-485d-bbe9-71f319b5a9a6" containerName="mariadb-account-create-update" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.532362 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-bxbff" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.553535 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bxbff"] Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.554757 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8f60e00-645d-465c-a973-55f9c9a1f2c1-operator-scripts\") pod \"cinder-db-create-bxbff\" (UID: \"c8f60e00-645d-465c-a973-55f9c9a1f2c1\") " pod="openstack/cinder-db-create-bxbff" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.554949 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wwwn\" (UniqueName: \"kubernetes.io/projected/c8f60e00-645d-465c-a973-55f9c9a1f2c1-kube-api-access-8wwwn\") pod \"cinder-db-create-bxbff\" (UID: \"c8f60e00-645d-465c-a973-55f9c9a1f2c1\") " pod="openstack/cinder-db-create-bxbff" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.656510 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8f60e00-645d-465c-a973-55f9c9a1f2c1-operator-scripts\") pod \"cinder-db-create-bxbff\" (UID: \"c8f60e00-645d-465c-a973-55f9c9a1f2c1\") " pod="openstack/cinder-db-create-bxbff" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.656646 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wwwn\" (UniqueName: \"kubernetes.io/projected/c8f60e00-645d-465c-a973-55f9c9a1f2c1-kube-api-access-8wwwn\") pod \"cinder-db-create-bxbff\" (UID: \"c8f60e00-645d-465c-a973-55f9c9a1f2c1\") " pod="openstack/cinder-db-create-bxbff" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.657329 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8f60e00-645d-465c-a973-55f9c9a1f2c1-operator-scripts\") pod \"cinder-db-create-bxbff\" (UID: \"c8f60e00-645d-465c-a973-55f9c9a1f2c1\") " pod="openstack/cinder-db-create-bxbff" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.672222 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wwwn\" (UniqueName: \"kubernetes.io/projected/c8f60e00-645d-465c-a973-55f9c9a1f2c1-kube-api-access-8wwwn\") pod \"cinder-db-create-bxbff\" (UID: \"c8f60e00-645d-465c-a973-55f9c9a1f2c1\") " pod="openstack/cinder-db-create-bxbff" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.731335 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bgszm"] Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.733045 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bgszm" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.749054 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-335a-account-create-update-bflvb"] Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.750622 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-335a-account-create-update-bflvb" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.758481 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5h9h\" (UniqueName: \"kubernetes.io/projected/2844562f-7d2e-435f-9bf1-58fe118e3345-kube-api-access-t5h9h\") pod \"barbican-db-create-bgszm\" (UID: \"2844562f-7d2e-435f-9bf1-58fe118e3345\") " pod="openstack/barbican-db-create-bgszm" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.758619 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2844562f-7d2e-435f-9bf1-58fe118e3345-operator-scripts\") pod \"barbican-db-create-bgszm\" (UID: \"2844562f-7d2e-435f-9bf1-58fe118e3345\") " pod="openstack/barbican-db-create-bgszm" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.758875 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bgszm"] Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.762339 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.780419 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-335a-account-create-update-bflvb"] Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.850290 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bxbff" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.860826 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5h9h\" (UniqueName: \"kubernetes.io/projected/2844562f-7d2e-435f-9bf1-58fe118e3345-kube-api-access-t5h9h\") pod \"barbican-db-create-bgszm\" (UID: \"2844562f-7d2e-435f-9bf1-58fe118e3345\") " pod="openstack/barbican-db-create-bgszm" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.860962 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fx6w\" (UniqueName: \"kubernetes.io/projected/065111e5-7fbf-4d19-b5b6-73fab236781b-kube-api-access-7fx6w\") pod \"barbican-335a-account-create-update-bflvb\" (UID: \"065111e5-7fbf-4d19-b5b6-73fab236781b\") " pod="openstack/barbican-335a-account-create-update-bflvb" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.861024 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2844562f-7d2e-435f-9bf1-58fe118e3345-operator-scripts\") pod \"barbican-db-create-bgszm\" (UID: \"2844562f-7d2e-435f-9bf1-58fe118e3345\") " pod="openstack/barbican-db-create-bgszm" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.861158 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065111e5-7fbf-4d19-b5b6-73fab236781b-operator-scripts\") pod \"barbican-335a-account-create-update-bflvb\" (UID: \"065111e5-7fbf-4d19-b5b6-73fab236781b\") " pod="openstack/barbican-335a-account-create-update-bflvb" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.862062 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2844562f-7d2e-435f-9bf1-58fe118e3345-operator-scripts\") pod \"barbican-db-create-bgszm\" (UID: 
\"2844562f-7d2e-435f-9bf1-58fe118e3345\") " pod="openstack/barbican-db-create-bgszm" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.867872 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0efc-account-create-update-6nv5m"] Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.869136 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0efc-account-create-update-6nv5m" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.879682 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.886315 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5h9h\" (UniqueName: \"kubernetes.io/projected/2844562f-7d2e-435f-9bf1-58fe118e3345-kube-api-access-t5h9h\") pod \"barbican-db-create-bgszm\" (UID: \"2844562f-7d2e-435f-9bf1-58fe118e3345\") " pod="openstack/barbican-db-create-bgszm" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.892869 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0efc-account-create-update-6nv5m"] Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.963758 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dlhf\" (UniqueName: \"kubernetes.io/projected/a244c95f-624e-4dca-833a-f290dd3c4465-kube-api-access-6dlhf\") pod \"cinder-0efc-account-create-update-6nv5m\" (UID: \"a244c95f-624e-4dca-833a-f290dd3c4465\") " pod="openstack/cinder-0efc-account-create-update-6nv5m" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.964249 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065111e5-7fbf-4d19-b5b6-73fab236781b-operator-scripts\") pod \"barbican-335a-account-create-update-bflvb\" (UID: \"065111e5-7fbf-4d19-b5b6-73fab236781b\") " pod="openstack/barbican-335a-account-create-update-bflvb" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.964425 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fx6w\" (UniqueName: \"kubernetes.io/projected/065111e5-7fbf-4d19-b5b6-73fab236781b-kube-api-access-7fx6w\") pod \"barbican-335a-account-create-update-bflvb\" (UID: \"065111e5-7fbf-4d19-b5b6-73fab236781b\") " pod="openstack/barbican-335a-account-create-update-bflvb" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.964469 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a244c95f-624e-4dca-833a-f290dd3c4465-operator-scripts\") pod \"cinder-0efc-account-create-update-6nv5m\" (UID: \"a244c95f-624e-4dca-833a-f290dd3c4465\") " pod="openstack/cinder-0efc-account-create-update-6nv5m" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.963889 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-75867"] Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.966125 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065111e5-7fbf-4d19-b5b6-73fab236781b-operator-scripts\") pod \"barbican-335a-account-create-update-bflvb\" (UID: \"065111e5-7fbf-4d19-b5b6-73fab236781b\") " pod="openstack/barbican-335a-account-create-update-bflvb" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.966775 4803 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-75867" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.971237 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.971611 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wcv24" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.971730 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.971864 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.982721 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-82klm"] Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.984528 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-82klm" Jan 27 22:10:33 crc kubenswrapper[4803]: I0127 22:10:33.994912 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fx6w\" (UniqueName: \"kubernetes.io/projected/065111e5-7fbf-4d19-b5b6-73fab236781b-kube-api-access-7fx6w\") pod \"barbican-335a-account-create-update-bflvb\" (UID: \"065111e5-7fbf-4d19-b5b6-73fab236781b\") " pod="openstack/barbican-335a-account-create-update-bflvb" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.014249 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-82klm"] Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.041168 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-75867"] Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.075664 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmzvl\" (UniqueName: \"kubernetes.io/projected/124a1c8a-df45-4295-92cc-cb1708dcd2dc-kube-api-access-mmzvl\") pod \"heat-db-create-82klm\" (UID: \"124a1c8a-df45-4295-92cc-cb1708dcd2dc\") " pod="openstack/heat-db-create-82klm" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.075962 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a244c95f-624e-4dca-833a-f290dd3c4465-operator-scripts\") pod \"cinder-0efc-account-create-update-6nv5m\" (UID: \"a244c95f-624e-4dca-833a-f290dd3c4465\") " pod="openstack/cinder-0efc-account-create-update-6nv5m" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.076059 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-combined-ca-bundle\") pod \"keystone-db-sync-75867\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " pod="openstack/keystone-db-sync-75867" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.076201 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dlhf\" (UniqueName: \"kubernetes.io/projected/a244c95f-624e-4dca-833a-f290dd3c4465-kube-api-access-6dlhf\") pod \"cinder-0efc-account-create-update-6nv5m\" (UID: \"a244c95f-624e-4dca-833a-f290dd3c4465\") " pod="openstack/cinder-0efc-account-create-update-6nv5m" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.076347 4803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ckpd\" (UniqueName: \"kubernetes.io/projected/c533a2e0-2bd7-4ffd-8954-f83b562aa811-kube-api-access-2ckpd\") pod \"keystone-db-sync-75867\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " pod="openstack/keystone-db-sync-75867" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.076428 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-config-data\") pod \"keystone-db-sync-75867\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " pod="openstack/keystone-db-sync-75867" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.076474 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124a1c8a-df45-4295-92cc-cb1708dcd2dc-operator-scripts\") pod \"heat-db-create-82klm\" (UID: \"124a1c8a-df45-4295-92cc-cb1708dcd2dc\") " pod="openstack/heat-db-create-82klm" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.077755 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a244c95f-624e-4dca-833a-f290dd3c4465-operator-scripts\") pod \"cinder-0efc-account-create-update-6nv5m\" (UID: \"a244c95f-624e-4dca-833a-f290dd3c4465\") " pod="openstack/cinder-0efc-account-create-update-6nv5m" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.094816 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dlhf\" (UniqueName: \"kubernetes.io/projected/a244c95f-624e-4dca-833a-f290dd3c4465-kube-api-access-6dlhf\") pod \"cinder-0efc-account-create-update-6nv5m\" (UID: \"a244c95f-624e-4dca-833a-f290dd3c4465\") " pod="openstack/cinder-0efc-account-create-update-6nv5m" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.169091 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bgszm" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.178370 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-combined-ca-bundle\") pod \"keystone-db-sync-75867\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " pod="openstack/keystone-db-sync-75867" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.178484 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ckpd\" (UniqueName: \"kubernetes.io/projected/c533a2e0-2bd7-4ffd-8954-f83b562aa811-kube-api-access-2ckpd\") pod \"keystone-db-sync-75867\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " pod="openstack/keystone-db-sync-75867" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.178514 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-config-data\") pod \"keystone-db-sync-75867\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " pod="openstack/keystone-db-sync-75867" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.178537 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124a1c8a-df45-4295-92cc-cb1708dcd2dc-operator-scripts\") pod \"heat-db-create-82klm\" (UID: \"124a1c8a-df45-4295-92cc-cb1708dcd2dc\") " pod="openstack/heat-db-create-82klm" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.178569 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmzvl\" (UniqueName: \"kubernetes.io/projected/124a1c8a-df45-4295-92cc-cb1708dcd2dc-kube-api-access-mmzvl\") pod \"heat-db-create-82klm\" (UID: \"124a1c8a-df45-4295-92cc-cb1708dcd2dc\") " pod="openstack/heat-db-create-82klm" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.179688 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124a1c8a-df45-4295-92cc-cb1708dcd2dc-operator-scripts\") pod \"heat-db-create-82klm\" (UID: \"124a1c8a-df45-4295-92cc-cb1708dcd2dc\") " pod="openstack/heat-db-create-82klm" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.182937 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-config-data\") pod \"keystone-db-sync-75867\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " pod="openstack/keystone-db-sync-75867" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.196750 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-335a-account-create-update-bflvb" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.200114 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-combined-ca-bundle\") pod \"keystone-db-sync-75867\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " pod="openstack/keystone-db-sync-75867" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.203878 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ckpd\" (UniqueName: \"kubernetes.io/projected/c533a2e0-2bd7-4ffd-8954-f83b562aa811-kube-api-access-2ckpd\") pod \"keystone-db-sync-75867\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " pod="openstack/keystone-db-sync-75867" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.209403 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmzvl\" (UniqueName: \"kubernetes.io/projected/124a1c8a-df45-4295-92cc-cb1708dcd2dc-kube-api-access-mmzvl\") pod \"heat-db-create-82klm\" (UID: \"124a1c8a-df45-4295-92cc-cb1708dcd2dc\") " pod="openstack/heat-db-create-82klm" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.229804 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0efc-account-create-update-6nv5m" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.239357 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-krcg6"] Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.262168 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-krcg6"] Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.262276 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-krcg6" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.270046 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-75867" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.278311 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-03da-account-create-update-pk298"] Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.281023 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twf9h\" (UniqueName: \"kubernetes.io/projected/0a1aea96-4bc5-4809-bd77-3d7b319f274a-kube-api-access-twf9h\") pod \"neutron-db-create-krcg6\" (UID: \"0a1aea96-4bc5-4809-bd77-3d7b319f274a\") " pod="openstack/neutron-db-create-krcg6" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.283621 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a1aea96-4bc5-4809-bd77-3d7b319f274a-operator-scripts\") pod \"neutron-db-create-krcg6\" (UID: \"0a1aea96-4bc5-4809-bd77-3d7b319f274a\") " pod="openstack/neutron-db-create-krcg6" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.285957 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-03da-account-create-update-pk298" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.290480 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-03da-account-create-update-pk298"] Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.292668 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.298641 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-82klm" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.386551 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twf9h\" (UniqueName: \"kubernetes.io/projected/0a1aea96-4bc5-4809-bd77-3d7b319f274a-kube-api-access-twf9h\") pod \"neutron-db-create-krcg6\" (UID: \"0a1aea96-4bc5-4809-bd77-3d7b319f274a\") " pod="openstack/neutron-db-create-krcg6" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.386590 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nr5j\" (UniqueName: \"kubernetes.io/projected/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-kube-api-access-5nr5j\") pod \"heat-03da-account-create-update-pk298\" (UID: \"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33\") " pod="openstack/heat-03da-account-create-update-pk298" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.386643 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a1aea96-4bc5-4809-bd77-3d7b319f274a-operator-scripts\") pod \"neutron-db-create-krcg6\" (UID: \"0a1aea96-4bc5-4809-bd77-3d7b319f274a\") " pod="openstack/neutron-db-create-krcg6" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.386746 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-operator-scripts\") pod \"heat-03da-account-create-update-pk298\" (UID: \"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33\") " pod="openstack/heat-03da-account-create-update-pk298" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.388254 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a1aea96-4bc5-4809-bd77-3d7b319f274a-operator-scripts\") pod \"neutron-db-create-krcg6\" (UID: \"0a1aea96-4bc5-4809-bd77-3d7b319f274a\") " pod="openstack/neutron-db-create-krcg6" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.430516 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twf9h\" (UniqueName: \"kubernetes.io/projected/0a1aea96-4bc5-4809-bd77-3d7b319f274a-kube-api-access-twf9h\") pod \"neutron-db-create-krcg6\" (UID: \"0a1aea96-4bc5-4809-bd77-3d7b319f274a\") " pod="openstack/neutron-db-create-krcg6" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.447757 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-krcg6" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.477150 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f9122f89-a56c-47d7-ad05-9aab6acdcc2f","Type":"ContainerStarted","Data":"0ba2be6098dc5df1b09a53fc14ce6165312fe0618ce6b2eacd4181753c354cea"} Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.477188 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-91fa-account-create-update-2n55g"] Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.478363 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-91fa-account-create-update-2n55g"] Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.478380 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bxbff"] Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.478447 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-91fa-account-create-update-2n55g" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.483535 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.491814 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nr5j\" (UniqueName: \"kubernetes.io/projected/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-kube-api-access-5nr5j\") pod \"heat-03da-account-create-update-pk298\" (UID: \"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33\") " pod="openstack/heat-03da-account-create-update-pk298" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.492421 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-operator-scripts\") pod \"heat-03da-account-create-update-pk298\" (UID: \"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33\") " pod="openstack/heat-03da-account-create-update-pk298" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.493082 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-operator-scripts\") pod \"heat-03da-account-create-update-pk298\" (UID: \"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33\") " pod="openstack/heat-03da-account-create-update-pk298" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.521479 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nr5j\" (UniqueName: \"kubernetes.io/projected/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-kube-api-access-5nr5j\") pod \"heat-03da-account-create-update-pk298\" (UID: \"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33\") " pod="openstack/heat-03da-account-create-update-pk298" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.595968 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbnfk\" (UniqueName: \"kubernetes.io/projected/d0bd3943-fd52-4f19-8d60-b3e7446de42e-kube-api-access-nbnfk\") pod \"neutron-91fa-account-create-update-2n55g\" (UID: \"d0bd3943-fd52-4f19-8d60-b3e7446de42e\") " pod="openstack/neutron-91fa-account-create-update-2n55g" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.596072 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d0bd3943-fd52-4f19-8d60-b3e7446de42e-operator-scripts\") pod \"neutron-91fa-account-create-update-2n55g\" (UID: \"d0bd3943-fd52-4f19-8d60-b3e7446de42e\") " pod="openstack/neutron-91fa-account-create-update-2n55g" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.699562 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbnfk\" (UniqueName: \"kubernetes.io/projected/d0bd3943-fd52-4f19-8d60-b3e7446de42e-kube-api-access-nbnfk\") pod \"neutron-91fa-account-create-update-2n55g\" (UID: \"d0bd3943-fd52-4f19-8d60-b3e7446de42e\") " pod="openstack/neutron-91fa-account-create-update-2n55g" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.699649 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0bd3943-fd52-4f19-8d60-b3e7446de42e-operator-scripts\") pod \"neutron-91fa-account-create-update-2n55g\" (UID: \"d0bd3943-fd52-4f19-8d60-b3e7446de42e\") " pod="openstack/neutron-91fa-account-create-update-2n55g" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.700463 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0bd3943-fd52-4f19-8d60-b3e7446de42e-operator-scripts\") pod \"neutron-91fa-account-create-update-2n55g\" (UID: \"d0bd3943-fd52-4f19-8d60-b3e7446de42e\") " pod="openstack/neutron-91fa-account-create-update-2n55g" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.719526 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbnfk\" (UniqueName: \"kubernetes.io/projected/d0bd3943-fd52-4f19-8d60-b3e7446de42e-kube-api-access-nbnfk\") pod \"neutron-91fa-account-create-update-2n55g\" (UID: \"d0bd3943-fd52-4f19-8d60-b3e7446de42e\") " pod="openstack/neutron-91fa-account-create-update-2n55g" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.764029 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-03da-account-create-update-pk298" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.820405 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-91fa-account-create-update-2n55g" Jan 27 22:10:34 crc kubenswrapper[4803]: I0127 22:10:34.904338 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bgszm"] Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.029683 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-335a-account-create-update-bflvb"] Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.120311 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-75867"] Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.134169 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-krcg6"] Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.145264 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0efc-account-create-update-6nv5m"] Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.155703 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-82klm"] Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.417120 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-335a-account-create-update-bflvb" event={"ID":"065111e5-7fbf-4d19-b5b6-73fab236781b","Type":"ContainerStarted","Data":"6c69fde3788de5976304a014deabc10146aa1685c9a8dde92b3542cf64661619"} Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.426499 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0efc-account-create-update-6nv5m" event={"ID":"a244c95f-624e-4dca-833a-f290dd3c4465","Type":"ContainerStarted","Data":"7a8ff981a3c302917d0e6750c8d79d0ed13711ceb4b23dce448215c0d9a26f8b"} Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.428712 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f9122f89-a56c-47d7-ad05-9aab6acdcc2f","Type":"ContainerStarted","Data":"c200e461ea31bdedab6427c54a9202058482162f3c6dcb33f226aa674ac9e8e1"} Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.429626 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-75867" event={"ID":"c533a2e0-2bd7-4ffd-8954-f83b562aa811","Type":"ContainerStarted","Data":"828bc32f51ba70b1906151ebb4a281941fc2eb95b4680a7a23181227d4e4fd5d"} Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.430501 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bgszm" event={"ID":"2844562f-7d2e-435f-9bf1-58fe118e3345","Type":"ContainerStarted","Data":"aa804546c3583153f22052bdfdd77615eaba2d575d07895934446ad721230122"} Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.434361 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-krcg6" event={"ID":"0a1aea96-4bc5-4809-bd77-3d7b319f274a","Type":"ContainerStarted","Data":"a0e9e08df48c5390d0186435f6b899226d41cf23ac30a780b4f473d4a2b7ff50"} Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.437115 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-82klm" event={"ID":"124a1c8a-df45-4295-92cc-cb1708dcd2dc","Type":"ContainerStarted","Data":"5ee2f24b5f76a52ba62b18b29847a0a0c2bf31204b3ed32bf54c56eeeef7a457"} Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.440363 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bxbff" 
event={"ID":"c8f60e00-645d-465c-a973-55f9c9a1f2c1","Type":"ContainerStarted","Data":"6cc2f0a725dc7ca49c9099c959b7266452faf78f19c52adc906f93748589e1f8"} Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.709484 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-03da-account-create-update-pk298"] Jan 27 22:10:35 crc kubenswrapper[4803]: I0127 22:10:35.871771 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-91fa-account-create-update-2n55g"] Jan 27 22:10:35 crc kubenswrapper[4803]: W0127 22:10:35.918823 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0bd3943_fd52_4f19_8d60_b3e7446de42e.slice/crio-05f7119100f582e7c8def9f91298943415dc561cba8109418e22e743fa27813a WatchSource:0}: Error finding container 05f7119100f582e7c8def9f91298943415dc561cba8109418e22e743fa27813a: Status 404 returned error can't find the container with id 05f7119100f582e7c8def9f91298943415dc561cba8109418e22e743fa27813a Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.463931 4803 generic.go:334] "Generic (PLEG): container finished" podID="a244c95f-624e-4dca-833a-f290dd3c4465" containerID="3b1d08e519ce4b18d9bf381e6539c6371c0c131b60eea2e31377809998d47349" exitCode=0 Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.464095 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0efc-account-create-update-6nv5m" event={"ID":"a244c95f-624e-4dca-833a-f290dd3c4465","Type":"ContainerDied","Data":"3b1d08e519ce4b18d9bf381e6539c6371c0c131b60eea2e31377809998d47349"} Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.468425 4803 generic.go:334] "Generic (PLEG): container finished" podID="2844562f-7d2e-435f-9bf1-58fe118e3345" containerID="61a403642726c8c1f6d042150bd4c6470628ebb1644ab30991d1922fb1182142" exitCode=0 Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.468477 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bgszm" event={"ID":"2844562f-7d2e-435f-9bf1-58fe118e3345","Type":"ContainerDied","Data":"61a403642726c8c1f6d042150bd4c6470628ebb1644ab30991d1922fb1182142"} Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.474151 4803 generic.go:334] "Generic (PLEG): container finished" podID="0a1aea96-4bc5-4809-bd77-3d7b319f274a" containerID="39c5bc2f05c3aecd54d32f1cb4b7f6a52a0d5714aab8d01a0b46c06c6cb05655" exitCode=0 Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.474240 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-krcg6" event={"ID":"0a1aea96-4bc5-4809-bd77-3d7b319f274a","Type":"ContainerDied","Data":"39c5bc2f05c3aecd54d32f1cb4b7f6a52a0d5714aab8d01a0b46c06c6cb05655"} Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.478626 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-91fa-account-create-update-2n55g" event={"ID":"d0bd3943-fd52-4f19-8d60-b3e7446de42e","Type":"ContainerStarted","Data":"05f7119100f582e7c8def9f91298943415dc561cba8109418e22e743fa27813a"} Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.483068 4803 generic.go:334] "Generic (PLEG): container finished" podID="124a1c8a-df45-4295-92cc-cb1708dcd2dc" containerID="c967edab7ae778868b0e850c719e6439de65f198089c3b57bd1bb3ad1fa68104" exitCode=0 Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.483137 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-82klm" 
event={"ID":"124a1c8a-df45-4295-92cc-cb1708dcd2dc","Type":"ContainerDied","Data":"c967edab7ae778868b0e850c719e6439de65f198089c3b57bd1bb3ad1fa68104"} Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.485038 4803 generic.go:334] "Generic (PLEG): container finished" podID="c8f60e00-645d-465c-a973-55f9c9a1f2c1" containerID="05c3391be8c429a9c693ca6b537e30cf8487c781c96c5aaa83d4317cc3b9a20b" exitCode=0 Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.485125 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bxbff" event={"ID":"c8f60e00-645d-465c-a973-55f9c9a1f2c1","Type":"ContainerDied","Data":"05c3391be8c429a9c693ca6b537e30cf8487c781c96c5aaa83d4317cc3b9a20b"} Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.489478 4803 generic.go:334] "Generic (PLEG): container finished" podID="0c0706a6-8dac-4c9e-8d69-04e89e9e0c33" containerID="9649a0ce2c19fe320f87eb38a9da445df5c05d990fc66e06f2f9aa49d45ae697" exitCode=0 Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.489620 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-03da-account-create-update-pk298" event={"ID":"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33","Type":"ContainerDied","Data":"9649a0ce2c19fe320f87eb38a9da445df5c05d990fc66e06f2f9aa49d45ae697"} Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.489650 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-03da-account-create-update-pk298" event={"ID":"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33","Type":"ContainerStarted","Data":"f6ea5e82e242679a1eacdc39c9a6536dc4c8705125bc76b98e9fad9fa9cd56d6"} Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.495369 4803 generic.go:334] "Generic (PLEG): container finished" podID="065111e5-7fbf-4d19-b5b6-73fab236781b" containerID="c2b63d960a165198b02cec6dde060f46f932f9173edc2c7727d09aa301b723b1" exitCode=0 Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.495479 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-335a-account-create-update-bflvb" event={"ID":"065111e5-7fbf-4d19-b5b6-73fab236781b","Type":"ContainerDied","Data":"c2b63d960a165198b02cec6dde060f46f932f9173edc2c7727d09aa301b723b1"} Jan 27 22:10:36 crc kubenswrapper[4803]: I0127 22:10:36.591648 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.591602676 podStartE2EDuration="17.591602676s" podCreationTimestamp="2026-01-27 22:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:36.582246843 +0000 UTC m=+1388.998268542" watchObservedRunningTime="2026-01-27 22:10:36.591602676 +0000 UTC m=+1389.007624375" Jan 27 22:10:37 crc kubenswrapper[4803]: I0127 22:10:37.510155 4803 generic.go:334] "Generic (PLEG): container finished" podID="d0bd3943-fd52-4f19-8d60-b3e7446de42e" containerID="b6d1d8d4c02b4138192c50cd2594d3a87dbb9c73d84442af56fdca4a434b077e" exitCode=0 Jan 27 22:10:37 crc kubenswrapper[4803]: I0127 22:10:37.510245 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-91fa-account-create-update-2n55g" event={"ID":"d0bd3943-fd52-4f19-8d60-b3e7446de42e","Type":"ContainerDied","Data":"b6d1d8d4c02b4138192c50cd2594d3a87dbb9c73d84442af56fdca4a434b077e"} Jan 27 22:10:39 crc kubenswrapper[4803]: I0127 22:10:39.714165 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:40 crc 
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.085103 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-335a-account-create-update-bflvb"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.117737 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bgszm"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.123435 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-82klm"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.130407 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-03da-account-create-update-pk298"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.140263 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bxbff"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.158411 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.170568 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-krcg6"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.172095 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fx6w\" (UniqueName: \"kubernetes.io/projected/065111e5-7fbf-4d19-b5b6-73fab236781b-kube-api-access-7fx6w\") pod \"065111e5-7fbf-4d19-b5b6-73fab236781b\" (UID: \"065111e5-7fbf-4d19-b5b6-73fab236781b\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.172133 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065111e5-7fbf-4d19-b5b6-73fab236781b-operator-scripts\") pod \"065111e5-7fbf-4d19-b5b6-73fab236781b\" (UID: \"065111e5-7fbf-4d19-b5b6-73fab236781b\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.172221 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a244c95f-624e-4dca-833a-f290dd3c4465-operator-scripts\") pod \"a244c95f-624e-4dca-833a-f290dd3c4465\" (UID: \"a244c95f-624e-4dca-833a-f290dd3c4465\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.172278 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nr5j\" (UniqueName: \"kubernetes.io/projected/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-kube-api-access-5nr5j\") pod \"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33\" (UID: \"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.172318 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-operator-scripts\") pod \"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33\" (UID: \"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.172385 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2844562f-7d2e-435f-9bf1-58fe118e3345-operator-scripts\") pod \"2844562f-7d2e-435f-9bf1-58fe118e3345\" (UID: \"2844562f-7d2e-435f-9bf1-58fe118e3345\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.172448 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dlhf\" (UniqueName: \"kubernetes.io/projected/a244c95f-624e-4dca-833a-f290dd3c4465-kube-api-access-6dlhf\") pod \"a244c95f-624e-4dca-833a-f290dd3c4465\" (UID: \"a244c95f-624e-4dca-833a-f290dd3c4465\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.172480 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124a1c8a-df45-4295-92cc-cb1708dcd2dc-operator-scripts\") pod \"124a1c8a-df45-4295-92cc-cb1708dcd2dc\" (UID: \"124a1c8a-df45-4295-92cc-cb1708dcd2dc\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.172591 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmzvl\" (UniqueName: \"kubernetes.io/projected/124a1c8a-df45-4295-92cc-cb1708dcd2dc-kube-api-access-mmzvl\") pod \"124a1c8a-df45-4295-92cc-cb1708dcd2dc\" (UID: \"124a1c8a-df45-4295-92cc-cb1708dcd2dc\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.172625 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5h9h\" (UniqueName: \"kubernetes.io/projected/2844562f-7d2e-435f-9bf1-58fe118e3345-kube-api-access-t5h9h\") pod \"2844562f-7d2e-435f-9bf1-58fe118e3345\" (UID: \"2844562f-7d2e-435f-9bf1-58fe118e3345\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.173130 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a244c95f-624e-4dca-833a-f290dd3c4465-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a244c95f-624e-4dca-833a-f290dd3c4465" (UID: "a244c95f-624e-4dca-833a-f290dd3c4465"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.173209 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c0706a6-8dac-4c9e-8d69-04e89e9e0c33" (UID: "0c0706a6-8dac-4c9e-8d69-04e89e9e0c33"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.173351 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065111e5-7fbf-4d19-b5b6-73fab236781b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "065111e5-7fbf-4d19-b5b6-73fab236781b" (UID: "065111e5-7fbf-4d19-b5b6-73fab236781b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.173374 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124a1c8a-df45-4295-92cc-cb1708dcd2dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "124a1c8a-df45-4295-92cc-cb1708dcd2dc" (UID: "124a1c8a-df45-4295-92cc-cb1708dcd2dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.174165 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124a1c8a-df45-4295-92cc-cb1708dcd2dc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.174187 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065111e5-7fbf-4d19-b5b6-73fab236781b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.174198 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a244c95f-624e-4dca-833a-f290dd3c4465-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.174211 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.175329 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2844562f-7d2e-435f-9bf1-58fe118e3345-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2844562f-7d2e-435f-9bf1-58fe118e3345" (UID: "2844562f-7d2e-435f-9bf1-58fe118e3345"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.179612 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-91fa-account-create-update-2n55g"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.180062 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-kube-api-access-5nr5j" (OuterVolumeSpecName: "kube-api-access-5nr5j") pod "0c0706a6-8dac-4c9e-8d69-04e89e9e0c33" (UID: "0c0706a6-8dac-4c9e-8d69-04e89e9e0c33"). InnerVolumeSpecName "kube-api-access-5nr5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.185660 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065111e5-7fbf-4d19-b5b6-73fab236781b-kube-api-access-7fx6w" (OuterVolumeSpecName: "kube-api-access-7fx6w") pod "065111e5-7fbf-4d19-b5b6-73fab236781b" (UID: "065111e5-7fbf-4d19-b5b6-73fab236781b"). InnerVolumeSpecName "kube-api-access-7fx6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.192704 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2844562f-7d2e-435f-9bf1-58fe118e3345-kube-api-access-t5h9h" (OuterVolumeSpecName: "kube-api-access-t5h9h") pod "2844562f-7d2e-435f-9bf1-58fe118e3345" (UID: "2844562f-7d2e-435f-9bf1-58fe118e3345"). InnerVolumeSpecName "kube-api-access-t5h9h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.213793 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124a1c8a-df45-4295-92cc-cb1708dcd2dc-kube-api-access-mmzvl" (OuterVolumeSpecName: "kube-api-access-mmzvl") pod "124a1c8a-df45-4295-92cc-cb1708dcd2dc" (UID: "124a1c8a-df45-4295-92cc-cb1708dcd2dc"). InnerVolumeSpecName "kube-api-access-mmzvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.223193 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a244c95f-624e-4dca-833a-f290dd3c4465-kube-api-access-6dlhf" (OuterVolumeSpecName: "kube-api-access-6dlhf") pod "a244c95f-624e-4dca-833a-f290dd3c4465" (UID: "a244c95f-624e-4dca-833a-f290dd3c4465"). InnerVolumeSpecName "kube-api-access-6dlhf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.265752 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pjgqn"]
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.266104 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" podUID="4fafdbaa-01ec-42c3-afd2-5416c549677f" containerName="dnsmasq-dns" containerID="cri-o://609f34d79c9a61cf6ef2b7e79f12b036b9a6e2413a7c5943daf48b422ddb8ce8" gracePeriod=10
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.276814 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a1aea96-4bc5-4809-bd77-3d7b319f274a-operator-scripts\") pod \"0a1aea96-4bc5-4809-bd77-3d7b319f274a\" (UID: \"0a1aea96-4bc5-4809-bd77-3d7b319f274a\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.277148 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0bd3943-fd52-4f19-8d60-b3e7446de42e-operator-scripts\") pod \"d0bd3943-fd52-4f19-8d60-b3e7446de42e\" (UID: \"d0bd3943-fd52-4f19-8d60-b3e7446de42e\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.277190 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wwwn\" (UniqueName: \"kubernetes.io/projected/c8f60e00-645d-465c-a973-55f9c9a1f2c1-kube-api-access-8wwwn\") pod \"c8f60e00-645d-465c-a973-55f9c9a1f2c1\" (UID: \"c8f60e00-645d-465c-a973-55f9c9a1f2c1\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.277249 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8f60e00-645d-465c-a973-55f9c9a1f2c1-operator-scripts\") pod \"c8f60e00-645d-465c-a973-55f9c9a1f2c1\" (UID: \"c8f60e00-645d-465c-a973-55f9c9a1f2c1\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.277316 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twf9h\" (UniqueName: \"kubernetes.io/projected/0a1aea96-4bc5-4809-bd77-3d7b319f274a-kube-api-access-twf9h\") pod \"0a1aea96-4bc5-4809-bd77-3d7b319f274a\" (UID: \"0a1aea96-4bc5-4809-bd77-3d7b319f274a\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.277353 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbnfk\" (UniqueName: \"kubernetes.io/projected/d0bd3943-fd52-4f19-8d60-b3e7446de42e-kube-api-access-nbnfk\") pod \"d0bd3943-fd52-4f19-8d60-b3e7446de42e\" (UID: \"d0bd3943-fd52-4f19-8d60-b3e7446de42e\") "
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.277998 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nr5j\" (UniqueName: \"kubernetes.io/projected/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33-kube-api-access-5nr5j\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.278016 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2844562f-7d2e-435f-9bf1-58fe118e3345-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.278027 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dlhf\" (UniqueName: \"kubernetes.io/projected/a244c95f-624e-4dca-833a-f290dd3c4465-kube-api-access-6dlhf\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.278040 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmzvl\" (UniqueName: \"kubernetes.io/projected/124a1c8a-df45-4295-92cc-cb1708dcd2dc-kube-api-access-mmzvl\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.278052 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5h9h\" (UniqueName: \"kubernetes.io/projected/2844562f-7d2e-435f-9bf1-58fe118e3345-kube-api-access-t5h9h\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.278062 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fx6w\" (UniqueName: \"kubernetes.io/projected/065111e5-7fbf-4d19-b5b6-73fab236781b-kube-api-access-7fx6w\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.280240 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a1aea96-4bc5-4809-bd77-3d7b319f274a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a1aea96-4bc5-4809-bd77-3d7b319f274a" (UID: "0a1aea96-4bc5-4809-bd77-3d7b319f274a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.280742 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0bd3943-fd52-4f19-8d60-b3e7446de42e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0bd3943-fd52-4f19-8d60-b3e7446de42e" (UID: "d0bd3943-fd52-4f19-8d60-b3e7446de42e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.283487 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f60e00-645d-465c-a973-55f9c9a1f2c1-kube-api-access-8wwwn" (OuterVolumeSpecName: "kube-api-access-8wwwn") pod "c8f60e00-645d-465c-a973-55f9c9a1f2c1" (UID: "c8f60e00-645d-465c-a973-55f9c9a1f2c1"). InnerVolumeSpecName "kube-api-access-8wwwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.283707 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f60e00-645d-465c-a973-55f9c9a1f2c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8f60e00-645d-465c-a973-55f9c9a1f2c1" (UID: "c8f60e00-645d-465c-a973-55f9c9a1f2c1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.302181 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a1aea96-4bc5-4809-bd77-3d7b319f274a-kube-api-access-twf9h" (OuterVolumeSpecName: "kube-api-access-twf9h") pod "0a1aea96-4bc5-4809-bd77-3d7b319f274a" (UID: "0a1aea96-4bc5-4809-bd77-3d7b319f274a"). InnerVolumeSpecName "kube-api-access-twf9h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.315204 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0bd3943-fd52-4f19-8d60-b3e7446de42e-kube-api-access-nbnfk" (OuterVolumeSpecName: "kube-api-access-nbnfk") pod "d0bd3943-fd52-4f19-8d60-b3e7446de42e" (UID: "d0bd3943-fd52-4f19-8d60-b3e7446de42e"). InnerVolumeSpecName "kube-api-access-nbnfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.383368 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twf9h\" (UniqueName: \"kubernetes.io/projected/0a1aea96-4bc5-4809-bd77-3d7b319f274a-kube-api-access-twf9h\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.383413 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbnfk\" (UniqueName: \"kubernetes.io/projected/d0bd3943-fd52-4f19-8d60-b3e7446de42e-kube-api-access-nbnfk\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.383424 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a1aea96-4bc5-4809-bd77-3d7b319f274a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.383433 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0bd3943-fd52-4f19-8d60-b3e7446de42e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.383442 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wwwn\" (UniqueName: \"kubernetes.io/projected/c8f60e00-645d-465c-a973-55f9c9a1f2c1-kube-api-access-8wwwn\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.383453 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8f60e00-645d-465c-a973-55f9c9a1f2c1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.571084 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-91fa-account-create-update-2n55g" event={"ID":"d0bd3943-fd52-4f19-8d60-b3e7446de42e","Type":"ContainerDied","Data":"05f7119100f582e7c8def9f91298943415dc561cba8109418e22e743fa27813a"}
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.571120 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-91fa-account-create-update-2n55g"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.571124 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f7119100f582e7c8def9f91298943415dc561cba8109418e22e743fa27813a"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.572971 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-03da-account-create-update-pk298"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.572987 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-03da-account-create-update-pk298" event={"ID":"0c0706a6-8dac-4c9e-8d69-04e89e9e0c33","Type":"ContainerDied","Data":"f6ea5e82e242679a1eacdc39c9a6536dc4c8705125bc76b98e9fad9fa9cd56d6"}
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.573009 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6ea5e82e242679a1eacdc39c9a6536dc4c8705125bc76b98e9fad9fa9cd56d6"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.574339 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-335a-account-create-update-bflvb" event={"ID":"065111e5-7fbf-4d19-b5b6-73fab236781b","Type":"ContainerDied","Data":"6c69fde3788de5976304a014deabc10146aa1685c9a8dde92b3542cf64661619"}
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.574358 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c69fde3788de5976304a014deabc10146aa1685c9a8dde92b3542cf64661619"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.574409 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-335a-account-create-update-bflvb"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.582460 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-krcg6" event={"ID":"0a1aea96-4bc5-4809-bd77-3d7b319f274a","Type":"ContainerDied","Data":"a0e9e08df48c5390d0186435f6b899226d41cf23ac30a780b4f473d4a2b7ff50"}
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.582496 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e9e08df48c5390d0186435f6b899226d41cf23ac30a780b4f473d4a2b7ff50"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.582495 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-krcg6"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.584704 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-82klm" event={"ID":"124a1c8a-df45-4295-92cc-cb1708dcd2dc","Type":"ContainerDied","Data":"5ee2f24b5f76a52ba62b18b29847a0a0c2bf31204b3ed32bf54c56eeeef7a457"}
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.584739 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee2f24b5f76a52ba62b18b29847a0a0c2bf31204b3ed32bf54c56eeeef7a457"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.584717 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-82klm"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.588706 4803 generic.go:334] "Generic (PLEG): container finished" podID="4fafdbaa-01ec-42c3-afd2-5416c549677f" containerID="609f34d79c9a61cf6ef2b7e79f12b036b9a6e2413a7c5943daf48b422ddb8ce8" exitCode=0
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.588766 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" event={"ID":"4fafdbaa-01ec-42c3-afd2-5416c549677f","Type":"ContainerDied","Data":"609f34d79c9a61cf6ef2b7e79f12b036b9a6e2413a7c5943daf48b422ddb8ce8"}
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.590662 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bxbff" event={"ID":"c8f60e00-645d-465c-a973-55f9c9a1f2c1","Type":"ContainerDied","Data":"6cc2f0a725dc7ca49c9099c959b7266452faf78f19c52adc906f93748589e1f8"}
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.590695 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc2f0a725dc7ca49c9099c959b7266452faf78f19c52adc906f93748589e1f8"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.590752 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bxbff"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.593381 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0efc-account-create-update-6nv5m" event={"ID":"a244c95f-624e-4dca-833a-f290dd3c4465","Type":"ContainerDied","Data":"7a8ff981a3c302917d0e6750c8d79d0ed13711ceb4b23dce448215c0d9a26f8b"}
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.593419 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8ff981a3c302917d0e6750c8d79d0ed13711ceb4b23dce448215c0d9a26f8b"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.593478 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0efc-account-create-update-6nv5m"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.596508 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-75867" event={"ID":"c533a2e0-2bd7-4ffd-8954-f83b562aa811","Type":"ContainerStarted","Data":"4d1ae031861d9850e3c39dddf8dcb91f3d5f2fed1da597c957ebe0614d2fc552"}
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.598819 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bgszm" event={"ID":"2844562f-7d2e-435f-9bf1-58fe118e3345","Type":"ContainerDied","Data":"aa804546c3583153f22052bdfdd77615eaba2d575d07895934446ad721230122"}
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.598861 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa804546c3583153f22052bdfdd77615eaba2d575d07895934446ad721230122"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.598905 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bgszm"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.624368 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-75867" podStartSLOduration=2.99244174 podStartE2EDuration="7.624344646s" podCreationTimestamp="2026-01-27 22:10:33 +0000 UTC" firstStartedPulling="2026-01-27 22:10:35.255285386 +0000 UTC m=+1387.671307105" lastFinishedPulling="2026-01-27 22:10:39.887188312 +0000 UTC m=+1392.303210011" observedRunningTime="2026-01-27 22:10:40.617270406 +0000 UTC m=+1393.033292115" watchObservedRunningTime="2026-01-27 22:10:40.624344646 +0000 UTC m=+1393.040366345"
Jan 27 22:10:40 crc kubenswrapper[4803]: I0127 22:10:40.951055 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn"
Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.000059 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-dns-svc\") pod \"4fafdbaa-01ec-42c3-afd2-5416c549677f\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") "
Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.000133 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-sb\") pod \"4fafdbaa-01ec-42c3-afd2-5416c549677f\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") "
Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.000288 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-config\") pod \"4fafdbaa-01ec-42c3-afd2-5416c549677f\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") "
Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.000357 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-nb\") pod \"4fafdbaa-01ec-42c3-afd2-5416c549677f\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") "
Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.000391 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78s8x\" (UniqueName: \"kubernetes.io/projected/4fafdbaa-01ec-42c3-afd2-5416c549677f-kube-api-access-78s8x\") pod \"4fafdbaa-01ec-42c3-afd2-5416c549677f\" (UID: \"4fafdbaa-01ec-42c3-afd2-5416c549677f\") "
Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.028005 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fafdbaa-01ec-42c3-afd2-5416c549677f-kube-api-access-78s8x" (OuterVolumeSpecName: "kube-api-access-78s8x") pod "4fafdbaa-01ec-42c3-afd2-5416c549677f" (UID: "4fafdbaa-01ec-42c3-afd2-5416c549677f"). InnerVolumeSpecName "kube-api-access-78s8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.057152 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4fafdbaa-01ec-42c3-afd2-5416c549677f" (UID: "4fafdbaa-01ec-42c3-afd2-5416c549677f"). InnerVolumeSpecName "ovsdbserver-sb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.059504 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4fafdbaa-01ec-42c3-afd2-5416c549677f" (UID: "4fafdbaa-01ec-42c3-afd2-5416c549677f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.062503 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4fafdbaa-01ec-42c3-afd2-5416c549677f" (UID: "4fafdbaa-01ec-42c3-afd2-5416c549677f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.062549 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-config" (OuterVolumeSpecName: "config") pod "4fafdbaa-01ec-42c3-afd2-5416c549677f" (UID: "4fafdbaa-01ec-42c3-afd2-5416c549677f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.106687 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.106728 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78s8x\" (UniqueName: \"kubernetes.io/projected/4fafdbaa-01ec-42c3-afd2-5416c549677f-kube-api-access-78s8x\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.106747 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.106760 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.106771 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fafdbaa-01ec-42c3-afd2-5416c549677f-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.609251 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" event={"ID":"4fafdbaa-01ec-42c3-afd2-5416c549677f","Type":"ContainerDied","Data":"1d2d20c6c9231e641feb8d1c2148ef44003e1edfc43368a479185b81e205264c"} Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.609308 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pjgqn" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.609356 4803 scope.go:117] "RemoveContainer" containerID="609f34d79c9a61cf6ef2b7e79f12b036b9a6e2413a7c5943daf48b422ddb8ce8" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.637064 4803 scope.go:117] "RemoveContainer" containerID="93f4de8c6d35f9da04b8ff5eab3d3b670f3808cac5a50bcf1436e452ed54f499" Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.665398 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pjgqn"] Jan 27 22:10:41 crc kubenswrapper[4803]: I0127 22:10:41.675265 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pjgqn"] Jan 27 22:10:42 crc kubenswrapper[4803]: I0127 22:10:42.318015 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fafdbaa-01ec-42c3-afd2-5416c549677f" path="/var/lib/kubelet/pods/4fafdbaa-01ec-42c3-afd2-5416c549677f/volumes" Jan 27 22:10:45 crc kubenswrapper[4803]: I0127 22:10:45.651223 4803 generic.go:334] "Generic (PLEG): container finished" podID="c533a2e0-2bd7-4ffd-8954-f83b562aa811" containerID="4d1ae031861d9850e3c39dddf8dcb91f3d5f2fed1da597c957ebe0614d2fc552" exitCode=0 Jan 27 22:10:45 crc kubenswrapper[4803]: I0127 22:10:45.651311 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-75867" event={"ID":"c533a2e0-2bd7-4ffd-8954-f83b562aa811","Type":"ContainerDied","Data":"4d1ae031861d9850e3c39dddf8dcb91f3d5f2fed1da597c957ebe0614d2fc552"} Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.033594 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-75867" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.118904 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ckpd\" (UniqueName: \"kubernetes.io/projected/c533a2e0-2bd7-4ffd-8954-f83b562aa811-kube-api-access-2ckpd\") pod \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.118969 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-combined-ca-bundle\") pod \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.119040 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-config-data\") pod \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\" (UID: \"c533a2e0-2bd7-4ffd-8954-f83b562aa811\") " Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.124381 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c533a2e0-2bd7-4ffd-8954-f83b562aa811-kube-api-access-2ckpd" (OuterVolumeSpecName: "kube-api-access-2ckpd") pod "c533a2e0-2bd7-4ffd-8954-f83b562aa811" (UID: "c533a2e0-2bd7-4ffd-8954-f83b562aa811"). InnerVolumeSpecName "kube-api-access-2ckpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.146966 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c533a2e0-2bd7-4ffd-8954-f83b562aa811" (UID: "c533a2e0-2bd7-4ffd-8954-f83b562aa811"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.167734 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-config-data" (OuterVolumeSpecName: "config-data") pod "c533a2e0-2bd7-4ffd-8954-f83b562aa811" (UID: "c533a2e0-2bd7-4ffd-8954-f83b562aa811"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.221910 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ckpd\" (UniqueName: \"kubernetes.io/projected/c533a2e0-2bd7-4ffd-8954-f83b562aa811-kube-api-access-2ckpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.221938 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.221947 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c533a2e0-2bd7-4ffd-8954-f83b562aa811-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.673440 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-75867" event={"ID":"c533a2e0-2bd7-4ffd-8954-f83b562aa811","Type":"ContainerDied","Data":"828bc32f51ba70b1906151ebb4a281941fc2eb95b4680a7a23181227d4e4fd5d"} Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.673496 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="828bc32f51ba70b1906151ebb4a281941fc2eb95b4680a7a23181227d4e4fd5d" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.673518 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-75867" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.952525 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nfkng"] Jan 27 22:10:47 crc kubenswrapper[4803]: E0127 22:10:47.952988 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fafdbaa-01ec-42c3-afd2-5416c549677f" containerName="init" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953005 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fafdbaa-01ec-42c3-afd2-5416c549677f" containerName="init" Jan 27 22:10:47 crc kubenswrapper[4803]: E0127 22:10:47.953020 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0bd3943-fd52-4f19-8d60-b3e7446de42e" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953027 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0bd3943-fd52-4f19-8d60-b3e7446de42e" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: E0127 22:10:47.953040 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0706a6-8dac-4c9e-8d69-04e89e9e0c33" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953046 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0706a6-8dac-4c9e-8d69-04e89e9e0c33" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: E0127 22:10:47.953059 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a244c95f-624e-4dca-833a-f290dd3c4465" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953065 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a244c95f-624e-4dca-833a-f290dd3c4465" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: E0127 22:10:47.953072 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a1aea96-4bc5-4809-bd77-3d7b319f274a" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953077 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a1aea96-4bc5-4809-bd77-3d7b319f274a" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: E0127 22:10:47.953090 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124a1c8a-df45-4295-92cc-cb1708dcd2dc" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953095 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="124a1c8a-df45-4295-92cc-cb1708dcd2dc" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: E0127 22:10:47.953111 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065111e5-7fbf-4d19-b5b6-73fab236781b" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953116 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="065111e5-7fbf-4d19-b5b6-73fab236781b" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: E0127 22:10:47.953125 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fafdbaa-01ec-42c3-afd2-5416c549677f" containerName="dnsmasq-dns" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953131 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fafdbaa-01ec-42c3-afd2-5416c549677f" containerName="dnsmasq-dns" Jan 27 22:10:47 crc kubenswrapper[4803]: 
E0127 22:10:47.953149 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f60e00-645d-465c-a973-55f9c9a1f2c1" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953156 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f60e00-645d-465c-a973-55f9c9a1f2c1" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: E0127 22:10:47.953162 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2844562f-7d2e-435f-9bf1-58fe118e3345" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953169 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="2844562f-7d2e-435f-9bf1-58fe118e3345" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: E0127 22:10:47.953179 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c533a2e0-2bd7-4ffd-8954-f83b562aa811" containerName="keystone-db-sync" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953184 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="c533a2e0-2bd7-4ffd-8954-f83b562aa811" containerName="keystone-db-sync" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953393 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f60e00-645d-465c-a973-55f9c9a1f2c1" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953411 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="124a1c8a-df45-4295-92cc-cb1708dcd2dc" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953423 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="2844562f-7d2e-435f-9bf1-58fe118e3345" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953430 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a1aea96-4bc5-4809-bd77-3d7b319f274a" containerName="mariadb-database-create" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953441 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a244c95f-624e-4dca-833a-f290dd3c4465" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953448 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0706a6-8dac-4c9e-8d69-04e89e9e0c33" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953459 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fafdbaa-01ec-42c3-afd2-5416c549677f" containerName="dnsmasq-dns" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953472 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="c533a2e0-2bd7-4ffd-8954-f83b562aa811" containerName="keystone-db-sync" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953487 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="065111e5-7fbf-4d19-b5b6-73fab236781b" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.953497 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0bd3943-fd52-4f19-8d60-b3e7446de42e" containerName="mariadb-account-create-update" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.954475 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.966629 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nfkng"] Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.979667 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cdf59"] Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.981733 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.985506 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.985589 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.985611 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.985669 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wcv24" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.985774 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 22:10:47 crc kubenswrapper[4803]: I0127 22:10:47.989911 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cdf59"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.042896 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-config-data\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.043006 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q57j9\" (UniqueName: \"kubernetes.io/projected/876179c9-330e-4456-a218-62c0b0eb2005-kube-api-access-q57j9\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.043052 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.043087 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-combined-ca-bundle\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.054547 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-scripts\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc 
kubenswrapper[4803]: I0127 22:10:48.054604 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-fernet-keys\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.054685 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.054711 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.054751 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-config\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.054791 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-credential-keys\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.054893 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tgcg\" (UniqueName: \"kubernetes.io/projected/bdead6de-9434-475c-b5ea-790d46196faf-kube-api-access-5tgcg\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.054922 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.143728 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-xmlbc"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.145099 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xmlbc" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.153572 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-n55fx" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.158352 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-scripts\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.168390 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-fernet-keys\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.168553 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.168634 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.168723 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-config\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.168866 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-credential-keys\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.169043 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tgcg\" (UniqueName: \"kubernetes.io/projected/bdead6de-9434-475c-b5ea-790d46196faf-kube-api-access-5tgcg\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.169089 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xmlbc"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.169175 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.169278 4803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-config-data\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.169421 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q57j9\" (UniqueName: \"kubernetes.io/projected/876179c9-330e-4456-a218-62c0b0eb2005-kube-api-access-q57j9\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.169500 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.170330 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.170226 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.154168 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.170656 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-config\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.170090 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.171070 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.170302 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-combined-ca-bundle\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.208674 4803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-fernet-keys\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.213240 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-scripts\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.226608 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tgcg\" (UniqueName: \"kubernetes.io/projected/bdead6de-9434-475c-b5ea-790d46196faf-kube-api-access-5tgcg\") pod \"dnsmasq-dns-bbf5cc879-nfkng\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.227337 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-config-data\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.236904 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-credential-keys\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.237950 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-combined-ca-bundle\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.241969 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q57j9\" (UniqueName: \"kubernetes.io/projected/876179c9-330e-4456-a218-62c0b0eb2005-kube-api-access-q57j9\") pod \"keystone-bootstrap-cdf59\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.258668 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-lsh9s"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.261136 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.264406 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w6ds7" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.265009 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.265315 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.272756 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-config-data\") pod \"heat-db-sync-xmlbc\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " pod="openstack/heat-db-sync-xmlbc" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.273016 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-combined-ca-bundle\") pod \"heat-db-sync-xmlbc\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " pod="openstack/heat-db-sync-xmlbc" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.273102 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tznx\" (UniqueName: \"kubernetes.io/projected/6c9761e2-3f55-4c05-be61-594fa9592844-kube-api-access-7tznx\") pod \"heat-db-sync-xmlbc\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " pod="openstack/heat-db-sync-xmlbc" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.275161 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.281351 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jdcs2"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.283022 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.307717 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.309439 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lsh9s"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.311779 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.311962 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.312062 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-25f27" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.375972 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82b4z\" (UniqueName: \"kubernetes.io/projected/d39e2273-cd2c-4e27-9890-39cf781c7508-kube-api-access-82b4z\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.376050 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-config-data\") pod \"heat-db-sync-xmlbc\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " pod="openstack/heat-db-sync-xmlbc" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.376074 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-combined-ca-bundle\") pod \"heat-db-sync-xmlbc\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " pod="openstack/heat-db-sync-xmlbc" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.376113 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tznx\" (UniqueName: \"kubernetes.io/projected/6c9761e2-3f55-4c05-be61-594fa9592844-kube-api-access-7tznx\") pod \"heat-db-sync-xmlbc\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " pod="openstack/heat-db-sync-xmlbc" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.376142 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-combined-ca-bundle\") pod \"neutron-db-sync-jdcs2\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.376216 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfkm7\" (UniqueName: \"kubernetes.io/projected/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-kube-api-access-zfkm7\") pod \"neutron-db-sync-jdcs2\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.376237 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d39e2273-cd2c-4e27-9890-39cf781c7508-etc-machine-id\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.376276 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-scripts\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.376291 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-db-sync-config-data\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.376328 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-config\") pod \"neutron-db-sync-jdcs2\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.376373 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-combined-ca-bundle\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.376405 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-config-data\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.389123 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-config-data\") pod \"heat-db-sync-xmlbc\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " pod="openstack/heat-db-sync-xmlbc" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.391431 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-combined-ca-bundle\") pod \"heat-db-sync-xmlbc\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " pod="openstack/heat-db-sync-xmlbc" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.416556 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jdcs2"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.450807 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ngppz"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.465259 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tznx\" (UniqueName: \"kubernetes.io/projected/6c9761e2-3f55-4c05-be61-594fa9592844-kube-api-access-7tznx\") pod \"heat-db-sync-xmlbc\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " pod="openstack/heat-db-sync-xmlbc" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.465971 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.467265 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.467510 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nngmh" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.473229 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.477953 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-vntfr"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.478742 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xmlbc" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.479316 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vntfr" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.479496 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82b4z\" (UniqueName: \"kubernetes.io/projected/d39e2273-cd2c-4e27-9890-39cf781c7508-kube-api-access-82b4z\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.479627 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-combined-ca-bundle\") pod \"neutron-db-sync-jdcs2\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.479728 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfkm7\" (UniqueName: \"kubernetes.io/projected/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-kube-api-access-zfkm7\") pod \"neutron-db-sync-jdcs2\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.479800 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d39e2273-cd2c-4e27-9890-39cf781c7508-etc-machine-id\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.479913 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-scripts\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.480011 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-db-sync-config-data\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.480097 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-config\") pod \"neutron-db-sync-jdcs2\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.480184 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-combined-ca-bundle\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.480280 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-config-data\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.483155 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d39e2273-cd2c-4e27-9890-39cf781c7508-etc-machine-id\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.483862 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.483964 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-49zkh" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.487788 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-combined-ca-bundle\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.492027 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-config\") pod \"neutron-db-sync-jdcs2\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.492400 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-config-data\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.495917 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-scripts\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.496821 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-combined-ca-bundle\") pod \"neutron-db-sync-jdcs2\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.496821 4803 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-db-sync-config-data\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.520006 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82b4z\" (UniqueName: \"kubernetes.io/projected/d39e2273-cd2c-4e27-9890-39cf781c7508-kube-api-access-82b4z\") pod \"cinder-db-sync-lsh9s\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.543435 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfkm7\" (UniqueName: \"kubernetes.io/projected/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-kube-api-access-zfkm7\") pod \"neutron-db-sync-jdcs2\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.554216 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vntfr"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.586039 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfw5q\" (UniqueName: \"kubernetes.io/projected/3469063f-f2e9-46a9-bc44-bb35cf4b2149-kube-api-access-vfw5q\") pod \"barbican-db-sync-vntfr\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " pod="openstack/barbican-db-sync-vntfr" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.588818 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-combined-ca-bundle\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.590371 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-combined-ca-bundle\") pod \"barbican-db-sync-vntfr\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " pod="openstack/barbican-db-sync-vntfr" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.590606 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-config-data\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.590684 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc2sq\" (UniqueName: \"kubernetes.io/projected/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-kube-api-access-cc2sq\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.590867 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-logs\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " 
pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.591052 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-scripts\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.591154 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-db-sync-config-data\") pod \"barbican-db-sync-vntfr\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " pod="openstack/barbican-db-sync-vntfr" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.589468 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ngppz"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.603988 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nfkng"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.629465 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b7894"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.631156 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.640504 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b7894"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708046 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-config\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708102 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-logs\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708162 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-scripts\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708190 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708208 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-db-sync-config-data\") pod \"barbican-db-sync-vntfr\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " 
pod="openstack/barbican-db-sync-vntfr" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708227 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfw5q\" (UniqueName: \"kubernetes.io/projected/3469063f-f2e9-46a9-bc44-bb35cf4b2149-kube-api-access-vfw5q\") pod \"barbican-db-sync-vntfr\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " pod="openstack/barbican-db-sync-vntfr" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708260 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708287 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708338 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708361 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn859\" (UniqueName: \"kubernetes.io/projected/ab4264b0-50c7-4427-8187-d7df34f01035-kube-api-access-fn859\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708415 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-combined-ca-bundle\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708434 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-combined-ca-bundle\") pod \"barbican-db-sync-vntfr\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " pod="openstack/barbican-db-sync-vntfr" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708493 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-config-data\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.708519 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc2sq\" (UniqueName: \"kubernetes.io/projected/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-kube-api-access-cc2sq\") pod \"placement-db-sync-ngppz\" (UID: 
\"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.709193 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-logs\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.718057 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-scripts\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.719975 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-config-data\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.720553 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-combined-ca-bundle\") pod \"barbican-db-sync-vntfr\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " pod="openstack/barbican-db-sync-vntfr" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.722448 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-combined-ca-bundle\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.730431 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfw5q\" (UniqueName: \"kubernetes.io/projected/3469063f-f2e9-46a9-bc44-bb35cf4b2149-kube-api-access-vfw5q\") pod \"barbican-db-sync-vntfr\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " pod="openstack/barbican-db-sync-vntfr" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.730443 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-db-sync-config-data\") pod \"barbican-db-sync-vntfr\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " pod="openstack/barbican-db-sync-vntfr" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.741551 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc2sq\" (UniqueName: \"kubernetes.io/projected/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-kube-api-access-cc2sq\") pod \"placement-db-sync-ngppz\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.805814 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.810555 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.810622 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.810658 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.810701 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.810749 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn859\" (UniqueName: \"kubernetes.io/projected/ab4264b0-50c7-4427-8187-d7df34f01035-kube-api-access-fn859\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.810916 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-config\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.812288 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-config\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.814229 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.814924 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc 
kubenswrapper[4803]: I0127 22:10:48.815132 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.820258 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.829791 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.841082 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn859\" (UniqueName: \"kubernetes.io/projected/ab4264b0-50c7-4427-8187-d7df34f01035-kube-api-access-fn859\") pod \"dnsmasq-dns-56df8fb6b7-b7894\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.862924 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ngppz" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.950448 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.951773 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vntfr" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.956870 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.968473 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.969668 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.980135 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:48 crc kubenswrapper[4803]: I0127 22:10:48.993683 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.017340 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-log-httpd\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.017393 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-config-data\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.017412 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-scripts\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.017445 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.017486 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.017509 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-run-httpd\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.017528 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5bqb\" (UniqueName: \"kubernetes.io/projected/e867acab-94c1-404c-976b-c1af058a4a24-kube-api-access-j5bqb\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.119736 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-config-data\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.171990 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-scripts\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: 
I0127 22:10:49.172094 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.172155 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.172195 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-run-httpd\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.172225 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5bqb\" (UniqueName: \"kubernetes.io/projected/e867acab-94c1-404c-976b-c1af058a4a24-kube-api-access-j5bqb\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.172573 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-log-httpd\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.173136 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-log-httpd\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.149553 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-config-data\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.141889 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cdf59"] Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.175557 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-run-httpd\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.182329 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.182769 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-scripts\") pod \"ceilometer-0\" (UID: 
\"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.183595 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.187914 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.193952 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.201425 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.204206 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.204430 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.228275 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5kkmh" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.229195 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5bqb\" (UniqueName: \"kubernetes.io/projected/e867acab-94c1-404c-976b-c1af058a4a24-kube-api-access-j5bqb\") pod \"ceilometer-0\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.233253 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.249744 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.251533 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.254953 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.255174 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.265904 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.274372 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-config-data\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.274533 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-logs\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.274636 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.274723 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.274896 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.275077 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.275172 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-scripts\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.275307 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgl7t\" (UniqueName: \"kubernetes.io/projected/8756aadc-102a-48c8-9c23-464b1fe6ff77-kube-api-access-kgl7t\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.278191 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.298904 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nfkng"] Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379166 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379498 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379531 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-scripts\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379564 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379581 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-logs\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379625 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgl7t\" (UniqueName: \"kubernetes.io/projected/8756aadc-102a-48c8-9c23-464b1fe6ff77-kube-api-access-kgl7t\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379643 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379688 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379718 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-config-data\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379742 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-logs\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379774 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379809 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379829 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379853 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379894 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z85l5\" (UniqueName: \"kubernetes.io/projected/b4315e2c-5437-4ac7-af8d-fb9ed2298326-kube-api-access-z85l5\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.379948 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.381325 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-logs\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.381667 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.389406 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.396864 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.397125 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/82afd0a610ddd892574d89cd2a35286bd9ea734e30ae6ef371122711a69797f9/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.398146 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-config-data\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.398825 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-scripts\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.450954 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.457052 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgl7t\" (UniqueName: \"kubernetes.io/projected/8756aadc-102a-48c8-9c23-464b1fe6ff77-kube-api-access-kgl7t\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.481357 
4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.481806 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.481900 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-logs\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.481989 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.482066 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.482207 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.482309 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.482405 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z85l5\" (UniqueName: \"kubernetes.io/projected/b4315e2c-5437-4ac7-af8d-fb9ed2298326-kube-api-access-z85l5\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.486223 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.490197 4803 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.490744 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-logs\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.495416 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.497082 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xmlbc"] Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.501554 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.505041 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.505083 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aca46170828d627b2eca91669573c71b0777b4758f25e31ae46ddbca6c8ecc63/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.517385 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z85l5\" (UniqueName: \"kubernetes.io/projected/b4315e2c-5437-4ac7-af8d-fb9ed2298326-kube-api-access-z85l5\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.519370 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.648712 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.650813 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.714230 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.722297 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.724039 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" event={"ID":"bdead6de-9434-475c-b5ea-790d46196faf","Type":"ContainerStarted","Data":"0d2902b13851e25ef8d277f74fc8cf5473316c1c29c289e8f1ea98cce456f951"} Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.735622 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cdf59" event={"ID":"876179c9-330e-4456-a218-62c0b0eb2005","Type":"ContainerStarted","Data":"3c7e5f1f6a436b9c49d22d3deb23fe42b87023f3289a025ef8b139d422be95b6"} Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.735674 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cdf59" event={"ID":"876179c9-330e-4456-a218-62c0b0eb2005","Type":"ContainerStarted","Data":"2da458d52402fb565adc5487dd1413cb562ce51854aa0ae16e7ddbbf7d3bdd94"} Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.736838 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xmlbc" event={"ID":"6c9761e2-3f55-4c05-be61-594fa9592844","Type":"ContainerStarted","Data":"909792f0f96917226d9142b33fbea9d8a3fc7817d5ba84855232edf4d935f56c"} Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.784651 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cdf59" podStartSLOduration=2.784634429 podStartE2EDuration="2.784634429s" podCreationTimestamp="2026-01-27 22:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:49.779329746 +0000 UTC m=+1402.195351445" watchObservedRunningTime="2026-01-27 22:10:49.784634429 +0000 UTC m=+1402.200656128" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.883430 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 22:10:49 crc kubenswrapper[4803]: I0127 22:10:49.905291 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.023349 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ngppz"] Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.082224 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jdcs2"] Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.117699 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b7894"] Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.144589 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lsh9s"] Jan 27 22:10:50 crc kubenswrapper[4803]: W0127 22:10:50.158929 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd39e2273_cd2c_4e27_9890_39cf781c7508.slice/crio-dcc1bcf45dc25f7ca00805686b91cdf524da3aba47e6e60533cd83474ffb944a WatchSource:0}: Error finding container dcc1bcf45dc25f7ca00805686b91cdf524da3aba47e6e60533cd83474ffb944a: Status 404 returned error can't find the container with id dcc1bcf45dc25f7ca00805686b91cdf524da3aba47e6e60533cd83474ffb944a Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.161880 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vntfr"] Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.367201 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:10:50 crc kubenswrapper[4803]: W0127 22:10:50.370594 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode867acab_94c1_404c_976b_c1af058a4a24.slice/crio-2f28c003f9f68c2e51b428f5e0c37eb7ca5b27d11eb875d0bce08bf7fcb04ab0 WatchSource:0}: Error finding container 2f28c003f9f68c2e51b428f5e0c37eb7ca5b27d11eb875d0bce08bf7fcb04ab0: Status 404 returned error can't find the container with id 2f28c003f9f68c2e51b428f5e0c37eb7ca5b27d11eb875d0bce08bf7fcb04ab0 Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.500522 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.581439 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.760603 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e867acab-94c1-404c-976b-c1af058a4a24","Type":"ContainerStarted","Data":"2f28c003f9f68c2e51b428f5e0c37eb7ca5b27d11eb875d0bce08bf7fcb04ab0"} Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.769126 4803 generic.go:334] "Generic (PLEG): container finished" podID="bdead6de-9434-475c-b5ea-790d46196faf" containerID="cd1010dd32e4ab7614213e3ce1c0242a0345fe64ca6ad0cbde6a7d531eb10d6e" exitCode=0 Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.769352 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" event={"ID":"bdead6de-9434-475c-b5ea-790d46196faf","Type":"ContainerDied","Data":"cd1010dd32e4ab7614213e3ce1c0242a0345fe64ca6ad0cbde6a7d531eb10d6e"} Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.785077 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lsh9s" 
event={"ID":"d39e2273-cd2c-4e27-9890-39cf781c7508","Type":"ContainerStarted","Data":"dcc1bcf45dc25f7ca00805686b91cdf524da3aba47e6e60533cd83474ffb944a"} Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.800867 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vntfr" event={"ID":"3469063f-f2e9-46a9-bc44-bb35cf4b2149","Type":"ContainerStarted","Data":"5c2cbf4d8273f9aed0ab462fe1125fc036e244670159d25d60bb53fa1612a41d"} Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.809737 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ngppz" event={"ID":"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca","Type":"ContainerStarted","Data":"ac76884c58d1e00a3380e7d90825206cb3dee3216e3eb0d39c32bface18e9c0e"} Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.832542 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jdcs2" event={"ID":"8cdc662a-87eb-4af4-916f-fe3746b4a1f0","Type":"ContainerStarted","Data":"12f500c3e88e10aa4f316d8bf4bc3541d87c21cdd5c55a8c87ddf0058e3b718b"} Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.832595 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jdcs2" event={"ID":"8cdc662a-87eb-4af4-916f-fe3746b4a1f0","Type":"ContainerStarted","Data":"26a19313b462697835cfb93331a24f2ed252d2a09d248cb91758d71d1cd8fe32"} Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.859976 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.882340 4803 generic.go:334] "Generic (PLEG): container finished" podID="ab4264b0-50c7-4427-8187-d7df34f01035" containerID="6d364fd9b52feeb520e2b8499b4afb4b1c3415c78128673fd628c61d935d63f5" exitCode=0 Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.882514 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" event={"ID":"ab4264b0-50c7-4427-8187-d7df34f01035","Type":"ContainerDied","Data":"6d364fd9b52feeb520e2b8499b4afb4b1c3415c78128673fd628c61d935d63f5"} Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.882550 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" event={"ID":"ab4264b0-50c7-4427-8187-d7df34f01035","Type":"ContainerStarted","Data":"27140fdfc309ad34fb3119070fe610e587e5997df2faf865bc15952dc32a2c21"} Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.890716 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.896561 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:10:50 crc kubenswrapper[4803]: I0127 22:10:50.901923 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jdcs2" podStartSLOduration=2.901904207 podStartE2EDuration="2.901904207s" podCreationTimestamp="2026-01-27 22:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:50.871027345 +0000 UTC m=+1403.287049044" watchObservedRunningTime="2026-01-27 22:10:50.901904207 +0000 UTC m=+1403.317925906" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.018525 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.472341 
4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.560477 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tgcg\" (UniqueName: \"kubernetes.io/projected/bdead6de-9434-475c-b5ea-790d46196faf-kube-api-access-5tgcg\") pod \"bdead6de-9434-475c-b5ea-790d46196faf\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.561379 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-sb\") pod \"bdead6de-9434-475c-b5ea-790d46196faf\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.561413 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-svc\") pod \"bdead6de-9434-475c-b5ea-790d46196faf\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.561565 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-nb\") pod \"bdead6de-9434-475c-b5ea-790d46196faf\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.561606 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-config\") pod \"bdead6de-9434-475c-b5ea-790d46196faf\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.561722 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-swift-storage-0\") pod \"bdead6de-9434-475c-b5ea-790d46196faf\" (UID: \"bdead6de-9434-475c-b5ea-790d46196faf\") " Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.586653 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdead6de-9434-475c-b5ea-790d46196faf-kube-api-access-5tgcg" (OuterVolumeSpecName: "kube-api-access-5tgcg") pod "bdead6de-9434-475c-b5ea-790d46196faf" (UID: "bdead6de-9434-475c-b5ea-790d46196faf"). InnerVolumeSpecName "kube-api-access-5tgcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.590694 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bdead6de-9434-475c-b5ea-790d46196faf" (UID: "bdead6de-9434-475c-b5ea-790d46196faf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.592330 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bdead6de-9434-475c-b5ea-790d46196faf" (UID: "bdead6de-9434-475c-b5ea-790d46196faf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.610606 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bdead6de-9434-475c-b5ea-790d46196faf" (UID: "bdead6de-9434-475c-b5ea-790d46196faf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.615574 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-config" (OuterVolumeSpecName: "config") pod "bdead6de-9434-475c-b5ea-790d46196faf" (UID: "bdead6de-9434-475c-b5ea-790d46196faf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.632120 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bdead6de-9434-475c-b5ea-790d46196faf" (UID: "bdead6de-9434-475c-b5ea-790d46196faf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.664246 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tgcg\" (UniqueName: \"kubernetes.io/projected/bdead6de-9434-475c-b5ea-790d46196faf-kube-api-access-5tgcg\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.664281 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.664295 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.664306 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.664316 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.664327 4803 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdead6de-9434-475c-b5ea-790d46196faf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.906349 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" event={"ID":"bdead6de-9434-475c-b5ea-790d46196faf","Type":"ContainerDied","Data":"0d2902b13851e25ef8d277f74fc8cf5473316c1c29c289e8f1ea98cce456f951"} Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.906395 4803 scope.go:117] "RemoveContainer" containerID="cd1010dd32e4ab7614213e3ce1c0242a0345fe64ca6ad0cbde6a7d531eb10d6e" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.906359 4803 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-nfkng" Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.909999 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8756aadc-102a-48c8-9c23-464b1fe6ff77","Type":"ContainerStarted","Data":"cd25ba6b8b8095706fa30d2e762021dc9bb7f69ff330738affcb16a287294d3a"} Jan 27 22:10:51 crc kubenswrapper[4803]: I0127 22:10:51.918027 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4315e2c-5437-4ac7-af8d-fb9ed2298326","Type":"ContainerStarted","Data":"2f9ad14f92a8742458f826b85cfb4096e96be35f10e43f5dbce20b1117361ba4"} Jan 27 22:10:52 crc kubenswrapper[4803]: I0127 22:10:52.037944 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nfkng"] Jan 27 22:10:52 crc kubenswrapper[4803]: I0127 22:10:52.051369 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-nfkng"] Jan 27 22:10:52 crc kubenswrapper[4803]: I0127 22:10:52.350620 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdead6de-9434-475c-b5ea-790d46196faf" path="/var/lib/kubelet/pods/bdead6de-9434-475c-b5ea-790d46196faf/volumes" Jan 27 22:10:52 crc kubenswrapper[4803]: I0127 22:10:52.931108 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8756aadc-102a-48c8-9c23-464b1fe6ff77","Type":"ContainerStarted","Data":"d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e"} Jan 27 22:10:52 crc kubenswrapper[4803]: I0127 22:10:52.934276 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" event={"ID":"ab4264b0-50c7-4427-8187-d7df34f01035","Type":"ContainerStarted","Data":"63c39221727abd63a986b017692343b35a69c84a6f4a522b00223e48384012d0"} Jan 27 22:10:52 crc kubenswrapper[4803]: I0127 22:10:52.935631 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:52 crc kubenswrapper[4803]: I0127 22:10:52.939134 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4315e2c-5437-4ac7-af8d-fb9ed2298326","Type":"ContainerStarted","Data":"becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28"} Jan 27 22:10:52 crc kubenswrapper[4803]: I0127 22:10:52.961105 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" podStartSLOduration=4.961087156 podStartE2EDuration="4.961087156s" podCreationTimestamp="2026-01-27 22:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:52.952077853 +0000 UTC m=+1405.368099562" watchObservedRunningTime="2026-01-27 22:10:52.961087156 +0000 UTC m=+1405.377108855" Jan 27 22:10:53 crc kubenswrapper[4803]: I0127 22:10:53.956499 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8756aadc-102a-48c8-9c23-464b1fe6ff77" containerName="glance-log" containerID="cri-o://d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e" gracePeriod=30 Jan 27 22:10:53 crc kubenswrapper[4803]: I0127 22:10:53.957117 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"8756aadc-102a-48c8-9c23-464b1fe6ff77","Type":"ContainerStarted","Data":"a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb"} Jan 27 22:10:53 crc kubenswrapper[4803]: I0127 22:10:53.957204 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8756aadc-102a-48c8-9c23-464b1fe6ff77" containerName="glance-httpd" containerID="cri-o://a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb" gracePeriod=30 Jan 27 22:10:53 crc kubenswrapper[4803]: I0127 22:10:53.962189 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4315e2c-5437-4ac7-af8d-fb9ed2298326","Type":"ContainerStarted","Data":"66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb"} Jan 27 22:10:53 crc kubenswrapper[4803]: I0127 22:10:53.962641 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b4315e2c-5437-4ac7-af8d-fb9ed2298326" containerName="glance-log" containerID="cri-o://becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28" gracePeriod=30 Jan 27 22:10:53 crc kubenswrapper[4803]: I0127 22:10:53.962741 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b4315e2c-5437-4ac7-af8d-fb9ed2298326" containerName="glance-httpd" containerID="cri-o://66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb" gracePeriod=30 Jan 27 22:10:53 crc kubenswrapper[4803]: I0127 22:10:53.989209 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.98918362 podStartE2EDuration="5.98918362s" podCreationTimestamp="2026-01-27 22:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:53.976819037 +0000 UTC m=+1406.392840746" watchObservedRunningTime="2026-01-27 22:10:53.98918362 +0000 UTC m=+1406.405205319" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.003651 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.003607528 podStartE2EDuration="6.003607528s" podCreationTimestamp="2026-01-27 22:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:10:53.999494118 +0000 UTC m=+1406.415515817" watchObservedRunningTime="2026-01-27 22:10:54.003607528 +0000 UTC m=+1406.419629227" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.731361 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.840863 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-scripts\") pod \"8756aadc-102a-48c8-9c23-464b1fe6ff77\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.840920 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-public-tls-certs\") pod \"8756aadc-102a-48c8-9c23-464b1fe6ff77\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.840941 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-logs\") pod \"8756aadc-102a-48c8-9c23-464b1fe6ff77\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.841068 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-httpd-run\") pod \"8756aadc-102a-48c8-9c23-464b1fe6ff77\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.841086 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-config-data\") pod \"8756aadc-102a-48c8-9c23-464b1fe6ff77\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.841119 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgl7t\" (UniqueName: \"kubernetes.io/projected/8756aadc-102a-48c8-9c23-464b1fe6ff77-kube-api-access-kgl7t\") pod \"8756aadc-102a-48c8-9c23-464b1fe6ff77\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.841394 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"8756aadc-102a-48c8-9c23-464b1fe6ff77\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.841439 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-combined-ca-bundle\") pod \"8756aadc-102a-48c8-9c23-464b1fe6ff77\" (UID: \"8756aadc-102a-48c8-9c23-464b1fe6ff77\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.842173 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8756aadc-102a-48c8-9c23-464b1fe6ff77" (UID: "8756aadc-102a-48c8-9c23-464b1fe6ff77"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.848913 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-logs" (OuterVolumeSpecName: "logs") pod "8756aadc-102a-48c8-9c23-464b1fe6ff77" (UID: "8756aadc-102a-48c8-9c23-464b1fe6ff77"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.850641 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-scripts" (OuterVolumeSpecName: "scripts") pod "8756aadc-102a-48c8-9c23-464b1fe6ff77" (UID: "8756aadc-102a-48c8-9c23-464b1fe6ff77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.852498 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8756aadc-102a-48c8-9c23-464b1fe6ff77-kube-api-access-kgl7t" (OuterVolumeSpecName: "kube-api-access-kgl7t") pod "8756aadc-102a-48c8-9c23-464b1fe6ff77" (UID: "8756aadc-102a-48c8-9c23-464b1fe6ff77"). InnerVolumeSpecName "kube-api-access-kgl7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.867121 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6" (OuterVolumeSpecName: "glance") pod "8756aadc-102a-48c8-9c23-464b1fe6ff77" (UID: "8756aadc-102a-48c8-9c23-464b1fe6ff77"). InnerVolumeSpecName "pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.891308 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8756aadc-102a-48c8-9c23-464b1fe6ff77" (UID: "8756aadc-102a-48c8-9c23-464b1fe6ff77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.903026 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.920296 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8756aadc-102a-48c8-9c23-464b1fe6ff77" (UID: "8756aadc-102a-48c8-9c23-464b1fe6ff77"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.950104 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.950185 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-scripts\") pod \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.950254 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-httpd-run\") pod \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.950293 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-logs\") pod \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.950330 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-config-data\") pod \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.950352 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z85l5\" (UniqueName: \"kubernetes.io/projected/b4315e2c-5437-4ac7-af8d-fb9ed2298326-kube-api-access-z85l5\") pod \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.950426 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-combined-ca-bundle\") pod \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.950556 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-internal-tls-certs\") pod \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\" (UID: \"b4315e2c-5437-4ac7-af8d-fb9ed2298326\") " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.951584 4803 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") on node \"crc\" " Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.951616 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.951634 4803 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.951646 4803 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.951657 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-logs\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.951668 4803 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8756aadc-102a-48c8-9c23-464b1fe6ff77-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.951681 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgl7t\" (UniqueName: \"kubernetes.io/projected/8756aadc-102a-48c8-9c23-464b1fe6ff77-kube-api-access-kgl7t\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.952971 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-config-data" (OuterVolumeSpecName: "config-data") pod "8756aadc-102a-48c8-9c23-464b1fe6ff77" (UID: "8756aadc-102a-48c8-9c23-464b1fe6ff77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.953133 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-logs" (OuterVolumeSpecName: "logs") pod "b4315e2c-5437-4ac7-af8d-fb9ed2298326" (UID: "b4315e2c-5437-4ac7-af8d-fb9ed2298326"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.957160 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b4315e2c-5437-4ac7-af8d-fb9ed2298326" (UID: "b4315e2c-5437-4ac7-af8d-fb9ed2298326"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.957721 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-scripts" (OuterVolumeSpecName: "scripts") pod "b4315e2c-5437-4ac7-af8d-fb9ed2298326" (UID: "b4315e2c-5437-4ac7-af8d-fb9ed2298326"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.959234 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4315e2c-5437-4ac7-af8d-fb9ed2298326-kube-api-access-z85l5" (OuterVolumeSpecName: "kube-api-access-z85l5") pod "b4315e2c-5437-4ac7-af8d-fb9ed2298326" (UID: "b4315e2c-5437-4ac7-af8d-fb9ed2298326"). InnerVolumeSpecName "kube-api-access-z85l5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.975074 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48" (OuterVolumeSpecName: "glance") pod "b4315e2c-5437-4ac7-af8d-fb9ed2298326" (UID: "b4315e2c-5437-4ac7-af8d-fb9ed2298326"). InnerVolumeSpecName "pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 22:10:54 crc kubenswrapper[4803]: I0127 22:10:54.985516 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4315e2c-5437-4ac7-af8d-fb9ed2298326" (UID: "b4315e2c-5437-4ac7-af8d-fb9ed2298326"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.000593 4803 generic.go:334] "Generic (PLEG): container finished" podID="b4315e2c-5437-4ac7-af8d-fb9ed2298326" containerID="66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb" exitCode=0 Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.000625 4803 generic.go:334] "Generic (PLEG): container finished" podID="b4315e2c-5437-4ac7-af8d-fb9ed2298326" containerID="becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28" exitCode=143 Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.000664 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4315e2c-5437-4ac7-af8d-fb9ed2298326","Type":"ContainerDied","Data":"66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb"} Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.000690 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4315e2c-5437-4ac7-af8d-fb9ed2298326","Type":"ContainerDied","Data":"becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28"} Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.000699 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4315e2c-5437-4ac7-af8d-fb9ed2298326","Type":"ContainerDied","Data":"2f9ad14f92a8742458f826b85cfb4096e96be35f10e43f5dbce20b1117361ba4"} Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.000713 4803 scope.go:117] "RemoveContainer" containerID="66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.000839 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.007461 4803 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.007639 4803 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6") on node "crc" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.012288 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b4315e2c-5437-4ac7-af8d-fb9ed2298326" (UID: "b4315e2c-5437-4ac7-af8d-fb9ed2298326"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.012315 4803 generic.go:334] "Generic (PLEG): container finished" podID="8756aadc-102a-48c8-9c23-464b1fe6ff77" containerID="a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb" exitCode=0 Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.012346 4803 generic.go:334] "Generic (PLEG): container finished" podID="8756aadc-102a-48c8-9c23-464b1fe6ff77" containerID="d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e" exitCode=143 Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.012396 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8756aadc-102a-48c8-9c23-464b1fe6ff77","Type":"ContainerDied","Data":"a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb"} Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.012422 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8756aadc-102a-48c8-9c23-464b1fe6ff77","Type":"ContainerDied","Data":"d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e"} Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.012432 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8756aadc-102a-48c8-9c23-464b1fe6ff77","Type":"ContainerDied","Data":"cd25ba6b8b8095706fa30d2e762021dc9bb7f69ff330738affcb16a287294d3a"} Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.012447 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.015789 4803 generic.go:334] "Generic (PLEG): container finished" podID="876179c9-330e-4456-a218-62c0b0eb2005" containerID="3c7e5f1f6a436b9c49d22d3deb23fe42b87023f3289a025ef8b139d422be95b6" exitCode=0 Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.015963 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cdf59" event={"ID":"876179c9-330e-4456-a218-62c0b0eb2005","Type":"ContainerDied","Data":"3c7e5f1f6a436b9c49d22d3deb23fe42b87023f3289a025ef8b139d422be95b6"} Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.034116 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-config-data" (OuterVolumeSpecName: "config-data") pod "b4315e2c-5437-4ac7-af8d-fb9ed2298326" (UID: "b4315e2c-5437-4ac7-af8d-fb9ed2298326"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.056902 4803 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.056943 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8756aadc-102a-48c8-9c23-464b1fe6ff77-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.057010 4803 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") on node \"crc\" " Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.057026 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.057037 4803 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.057047 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4315e2c-5437-4ac7-af8d-fb9ed2298326-logs\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.075662 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.075681 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z85l5\" (UniqueName: \"kubernetes.io/projected/b4315e2c-5437-4ac7-af8d-fb9ed2298326-kube-api-access-z85l5\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.075696 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4315e2c-5437-4ac7-af8d-fb9ed2298326-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.075714 4803 reconciler_common.go:293] "Volume detached for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.088115 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.137211 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.154505 4803 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.154741 4803 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48") on node "crc" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.164077 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:10:55 crc kubenswrapper[4803]: E0127 22:10:55.164678 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4315e2c-5437-4ac7-af8d-fb9ed2298326" containerName="glance-httpd" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.164697 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4315e2c-5437-4ac7-af8d-fb9ed2298326" containerName="glance-httpd" Jan 27 22:10:55 crc kubenswrapper[4803]: E0127 22:10:55.164718 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdead6de-9434-475c-b5ea-790d46196faf" containerName="init" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.164725 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdead6de-9434-475c-b5ea-790d46196faf" containerName="init" Jan 27 22:10:55 crc kubenswrapper[4803]: E0127 22:10:55.164769 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8756aadc-102a-48c8-9c23-464b1fe6ff77" containerName="glance-log" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.164777 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8756aadc-102a-48c8-9c23-464b1fe6ff77" containerName="glance-log" Jan 27 22:10:55 crc kubenswrapper[4803]: E0127 22:10:55.164785 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8756aadc-102a-48c8-9c23-464b1fe6ff77" containerName="glance-httpd" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.164793 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8756aadc-102a-48c8-9c23-464b1fe6ff77" containerName="glance-httpd" Jan 27 22:10:55 crc kubenswrapper[4803]: E0127 22:10:55.164811 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4315e2c-5437-4ac7-af8d-fb9ed2298326" containerName="glance-log" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.164818 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4315e2c-5437-4ac7-af8d-fb9ed2298326" containerName="glance-log" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.165079 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdead6de-9434-475c-b5ea-790d46196faf" containerName="init" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.165096 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4315e2c-5437-4ac7-af8d-fb9ed2298326" containerName="glance-httpd" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.165119 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8756aadc-102a-48c8-9c23-464b1fe6ff77" containerName="glance-httpd" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.165131 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4315e2c-5437-4ac7-af8d-fb9ed2298326" containerName="glance-log" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.165149 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8756aadc-102a-48c8-9c23-464b1fe6ff77" containerName="glance-log" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.168377 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.170134 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.176828 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.179936 4803 reconciler_common.go:293] "Volume detached for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") on node \"crc\" DevicePath \"\"" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.193047 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.281815 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-scripts\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.282112 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.282222 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.282312 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-config-data\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.282434 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.282575 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8792v\" (UniqueName: \"kubernetes.io/projected/2568e2db-68d1-49fc-a0fd-363e983d8b97-kube-api-access-8792v\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.282690 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.282793 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-logs\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.338566 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.375932 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.384993 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-scripts\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.385070 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.385103 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.385120 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-config-data\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.385149 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.385222 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8792v\" (UniqueName: \"kubernetes.io/projected/2568e2db-68d1-49fc-a0fd-363e983d8b97-kube-api-access-8792v\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.385252 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.385272 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-logs\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.385665 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-logs\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.389340 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.397723 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-config-data\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.398029 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.400236 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-scripts\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.401516 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.407326 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.414625 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.415543 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.416688 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.417664 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.441612 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8792v\" (UniqueName: \"kubernetes.io/projected/2568e2db-68d1-49fc-a0fd-363e983d8b97-kube-api-access-8792v\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.479960 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.480019 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/82afd0a610ddd892574d89cd2a35286bd9ea734e30ae6ef371122711a69797f9/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.491006 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdndz\" (UniqueName: \"kubernetes.io/projected/8215d5aa-a30a-4a03-8058-509b5d04b261-kube-api-access-zdndz\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.491093 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.491137 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.491198 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.491261 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.491286 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.491307 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-logs\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 
22:10:55.491323 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.593576 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.593669 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.593728 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.593756 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.593773 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-logs\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.593791 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.593828 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdndz\" (UniqueName: \"kubernetes.io/projected/8215d5aa-a30a-4a03-8058-509b5d04b261-kube-api-access-zdndz\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.593893 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.594416 4803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.595549 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-logs\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.602584 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.602588 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.603580 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.619764 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.624983 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdndz\" (UniqueName: \"kubernetes.io/projected/8215d5aa-a30a-4a03-8058-509b5d04b261-kube-api-access-zdndz\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.655128 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
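To pull structured fields out of entries like the MountVolume.SetUp messages above, a small self-contained parser is enough; the regular expression below is an assumption matched against the quoted format shown in this journal (with its literal backslash-escaped quotes), not a kubelet format guarantee:

```go
package main

import (
	"fmt"
	"regexp"
)

// volRe captures the volume name, UniqueName, and pod fields exactly as
// they appear in these journal lines.
var volRe = regexp.MustCompile(`volume \\"([^\\"]+)\\" \(UniqueName: \\"([^\\"]+)\\"\) pod \\"([^\\"]+)\\"`)

func main() {
	line := `MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\")`
	if m := volRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("volume=%s\nuniqueName=%s\npod=%s\n", m[1], m[2], m[3])
	}
}
```

The same pattern fits the UnmountVolume and VerifyControllerAttachedVolume entries too; note that in the UnmountVolume lines the pod field carries the pod UID rather than its name.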
Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.655168 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aca46170828d627b2eca91669573c71b0777b4758f25e31ae46ddbca6c8ecc63/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.698818 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.702627 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") " pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.819313 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 22:10:55 crc kubenswrapper[4803]: I0127 22:10:55.850896 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 22:10:56 crc kubenswrapper[4803]: I0127 22:10:56.335494 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8756aadc-102a-48c8-9c23-464b1fe6ff77" path="/var/lib/kubelet/pods/8756aadc-102a-48c8-9c23-464b1fe6ff77/volumes" Jan 27 22:10:56 crc kubenswrapper[4803]: I0127 22:10:56.337465 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4315e2c-5437-4ac7-af8d-fb9ed2298326" path="/var/lib/kubelet/pods/b4315e2c-5437-4ac7-af8d-fb9ed2298326/volumes" Jan 27 22:10:58 crc kubenswrapper[4803]: I0127 22:10:58.981617 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:10:59 crc kubenswrapper[4803]: I0127 22:10:59.048367 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hgl6x"] Jan 27 22:10:59 crc kubenswrapper[4803]: I0127 22:10:59.048612 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" podUID="4f372552-a6b6-4446-ae72-d1a8370b514e" containerName="dnsmasq-dns" containerID="cri-o://0bb285dc2f8321c8967cdbae618d0f4e33222f38b4dea29384bc4cfe8babc946" gracePeriod=10 Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.092238 4803 generic.go:334] "Generic (PLEG): container finished" podID="4f372552-a6b6-4446-ae72-d1a8370b514e" containerID="0bb285dc2f8321c8967cdbae618d0f4e33222f38b4dea29384bc4cfe8babc946" exitCode=0 Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.092305 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" 
event={"ID":"4f372552-a6b6-4446-ae72-d1a8370b514e","Type":"ContainerDied","Data":"0bb285dc2f8321c8967cdbae618d0f4e33222f38b4dea29384bc4cfe8babc946"} Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.156829 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" podUID="4f372552-a6b6-4446-ae72-d1a8370b514e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.173:5353: connect: connection refused" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.293941 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.358619 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-config-data\") pod \"876179c9-330e-4456-a218-62c0b0eb2005\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.358744 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-fernet-keys\") pod \"876179c9-330e-4456-a218-62c0b0eb2005\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.358868 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-credential-keys\") pod \"876179c9-330e-4456-a218-62c0b0eb2005\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.358908 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q57j9\" (UniqueName: \"kubernetes.io/projected/876179c9-330e-4456-a218-62c0b0eb2005-kube-api-access-q57j9\") pod \"876179c9-330e-4456-a218-62c0b0eb2005\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.358936 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-scripts\") pod \"876179c9-330e-4456-a218-62c0b0eb2005\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.359010 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-combined-ca-bundle\") pod \"876179c9-330e-4456-a218-62c0b0eb2005\" (UID: \"876179c9-330e-4456-a218-62c0b0eb2005\") " Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.368745 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-scripts" (OuterVolumeSpecName: "scripts") pod "876179c9-330e-4456-a218-62c0b0eb2005" (UID: "876179c9-330e-4456-a218-62c0b0eb2005"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.376119 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/876179c9-330e-4456-a218-62c0b0eb2005-kube-api-access-q57j9" (OuterVolumeSpecName: "kube-api-access-q57j9") pod "876179c9-330e-4456-a218-62c0b0eb2005" (UID: "876179c9-330e-4456-a218-62c0b0eb2005"). 
InnerVolumeSpecName "kube-api-access-q57j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.384223 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "876179c9-330e-4456-a218-62c0b0eb2005" (UID: "876179c9-330e-4456-a218-62c0b0eb2005"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.385069 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "876179c9-330e-4456-a218-62c0b0eb2005" (UID: "876179c9-330e-4456-a218-62c0b0eb2005"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.432456 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "876179c9-330e-4456-a218-62c0b0eb2005" (UID: "876179c9-330e-4456-a218-62c0b0eb2005"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.444176 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-config-data" (OuterVolumeSpecName: "config-data") pod "876179c9-330e-4456-a218-62c0b0eb2005" (UID: "876179c9-330e-4456-a218-62c0b0eb2005"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.462391 4803 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.462440 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q57j9\" (UniqueName: \"kubernetes.io/projected/876179c9-330e-4456-a218-62c0b0eb2005-kube-api-access-q57j9\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.462455 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.462465 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.462473 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:00 crc kubenswrapper[4803]: I0127 22:11:00.462484 4803 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/876179c9-330e-4456-a218-62c0b0eb2005-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.106685 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cdf59" event={"ID":"876179c9-330e-4456-a218-62c0b0eb2005","Type":"ContainerDied","Data":"2da458d52402fb565adc5487dd1413cb562ce51854aa0ae16e7ddbbf7d3bdd94"} Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.106750 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2da458d52402fb565adc5487dd1413cb562ce51854aa0ae16e7ddbbf7d3bdd94" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.106786 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cdf59" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.379389 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cdf59"] Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.387762 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cdf59"] Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.484576 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-df2jx"] Jan 27 22:11:01 crc kubenswrapper[4803]: E0127 22:11:01.485128 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876179c9-330e-4456-a218-62c0b0eb2005" containerName="keystone-bootstrap" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.485149 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="876179c9-330e-4456-a218-62c0b0eb2005" containerName="keystone-bootstrap" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.485341 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="876179c9-330e-4456-a218-62c0b0eb2005" containerName="keystone-bootstrap" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.486141 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.488663 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.488839 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.488970 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wcv24" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.491293 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.491416 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.496431 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-df2jx"] Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.592487 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8trpr\" (UniqueName: \"kubernetes.io/projected/c1309b4e-8ae9-4e41-ba61-1003d755c889-kube-api-access-8trpr\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.592534 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-fernet-keys\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.592584 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-scripts\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.592757 
4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-config-data\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.593229 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-credential-keys\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.593308 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-combined-ca-bundle\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.695163 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-credential-keys\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.695264 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-combined-ca-bundle\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.695907 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8trpr\" (UniqueName: \"kubernetes.io/projected/c1309b4e-8ae9-4e41-ba61-1003d755c889-kube-api-access-8trpr\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.695955 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-fernet-keys\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.695986 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-scripts\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.696037 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-config-data\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.699528 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-credential-keys\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.699533 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-scripts\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.699930 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-combined-ca-bundle\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.700084 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-config-data\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.701488 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-fernet-keys\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.712243 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8trpr\" (UniqueName: \"kubernetes.io/projected/c1309b4e-8ae9-4e41-ba61-1003d755c889-kube-api-access-8trpr\") pod \"keystone-bootstrap-df2jx\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:01 crc kubenswrapper[4803]: I0127 22:11:01.805322 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:02 crc kubenswrapper[4803]: I0127 22:11:02.320488 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="876179c9-330e-4456-a218-62c0b0eb2005" path="/var/lib/kubelet/pods/876179c9-330e-4456-a218-62c0b0eb2005/volumes" Jan 27 22:11:10 crc kubenswrapper[4803]: I0127 22:11:10.156982 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" podUID="4f372552-a6b6-4446-ae72-d1a8370b514e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.173:5353: i/o timeout" Jan 27 22:11:11 crc kubenswrapper[4803]: I0127 22:11:11.196659 4803 generic.go:334] "Generic (PLEG): container finished" podID="8cdc662a-87eb-4af4-916f-fe3746b4a1f0" containerID="12f500c3e88e10aa4f316d8bf4bc3541d87c21cdd5c55a8c87ddf0058e3b718b" exitCode=0 Jan 27 22:11:11 crc kubenswrapper[4803]: I0127 22:11:11.196947 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jdcs2" event={"ID":"8cdc662a-87eb-4af4-916f-fe3746b4a1f0","Type":"ContainerDied","Data":"12f500c3e88e10aa4f316d8bf4bc3541d87c21cdd5c55a8c87ddf0058e3b718b"} Jan 27 22:11:13 crc kubenswrapper[4803]: E0127 22:11:13.914416 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 27 22:11:13 crc kubenswrapper[4803]: E0127 22:11:13.915263 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfw5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-vntfr_openstack(3469063f-f2e9-46a9-bc44-bb35cf4b2149): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 22:11:13 crc kubenswrapper[4803]: E0127 22:11:13.918086 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-vntfr" podUID="3469063f-f2e9-46a9-bc44-bb35cf4b2149" Jan 27 22:11:13 crc kubenswrapper[4803]: I0127 22:11:13.925179 4803 scope.go:117] "RemoveContainer" containerID="becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.025102 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.113281 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-swift-storage-0\") pod \"4f372552-a6b6-4446-ae72-d1a8370b514e\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.113433 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-svc\") pod \"4f372552-a6b6-4446-ae72-d1a8370b514e\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.113459 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-config\") pod \"4f372552-a6b6-4446-ae72-d1a8370b514e\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.113506 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-sb\") pod \"4f372552-a6b6-4446-ae72-d1a8370b514e\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.113531 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-nb\") pod \"4f372552-a6b6-4446-ae72-d1a8370b514e\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.113590 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk942\" (UniqueName: \"kubernetes.io/projected/4f372552-a6b6-4446-ae72-d1a8370b514e-kube-api-access-sk942\") pod \"4f372552-a6b6-4446-ae72-d1a8370b514e\" (UID: \"4f372552-a6b6-4446-ae72-d1a8370b514e\") " Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.118701 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f372552-a6b6-4446-ae72-d1a8370b514e-kube-api-access-sk942" (OuterVolumeSpecName: "kube-api-access-sk942") pod "4f372552-a6b6-4446-ae72-d1a8370b514e" (UID: "4f372552-a6b6-4446-ae72-d1a8370b514e"). InnerVolumeSpecName "kube-api-access-sk942". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.167051 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-config" (OuterVolumeSpecName: "config") pod "4f372552-a6b6-4446-ae72-d1a8370b514e" (UID: "4f372552-a6b6-4446-ae72-d1a8370b514e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.168510 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f372552-a6b6-4446-ae72-d1a8370b514e" (UID: "4f372552-a6b6-4446-ae72-d1a8370b514e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.175181 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f372552-a6b6-4446-ae72-d1a8370b514e" (UID: "4f372552-a6b6-4446-ae72-d1a8370b514e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.182930 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4f372552-a6b6-4446-ae72-d1a8370b514e" (UID: "4f372552-a6b6-4446-ae72-d1a8370b514e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.208180 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4f372552-a6b6-4446-ae72-d1a8370b514e" (UID: "4f372552-a6b6-4446-ae72-d1a8370b514e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.215607 4803 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.215648 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.215659 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.215668 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.215678 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f372552-a6b6-4446-ae72-d1a8370b514e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.215687 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk942\" (UniqueName: \"kubernetes.io/projected/4f372552-a6b6-4446-ae72-d1a8370b514e-kube-api-access-sk942\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.233527 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.234343 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" event={"ID":"4f372552-a6b6-4446-ae72-d1a8370b514e","Type":"ContainerDied","Data":"d3655a7fbb654c2ad537f048c489e3d26ee82135832af78f350968ebe882d805"} Jan 27 22:11:14 crc kubenswrapper[4803]: E0127 22:11:14.236720 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-vntfr" podUID="3469063f-f2e9-46a9-bc44-bb35cf4b2149" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.319935 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hgl6x"] Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.321659 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hgl6x"] Jan 27 22:11:14 crc kubenswrapper[4803]: E0127 22:11:14.474642 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 27 22:11:14 crc kubenswrapper[4803]: E0127 22:11:14.475014 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c5h5cbh564h656h68h97h7dh5dfh598h65h548h695hd6h577h578h68ch5h594h545h646h5f5h667h59fh85h67fh78h5bh95h657h8fh5c9h58dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5bqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e867acab-94c1-404c-976b-c1af058a4a24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.476296 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.624136 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-combined-ca-bundle\") pod \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.624183 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfkm7\" (UniqueName: \"kubernetes.io/projected/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-kube-api-access-zfkm7\") pod \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.624334 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-config\") pod \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\" (UID: \"8cdc662a-87eb-4af4-916f-fe3746b4a1f0\") " Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.629501 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-kube-api-access-zfkm7" (OuterVolumeSpecName: "kube-api-access-zfkm7") pod "8cdc662a-87eb-4af4-916f-fe3746b4a1f0" (UID: "8cdc662a-87eb-4af4-916f-fe3746b4a1f0"). InnerVolumeSpecName "kube-api-access-zfkm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.652374 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-config" (OuterVolumeSpecName: "config") pod "8cdc662a-87eb-4af4-916f-fe3746b4a1f0" (UID: "8cdc662a-87eb-4af4-916f-fe3746b4a1f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.652695 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cdc662a-87eb-4af4-916f-fe3746b4a1f0" (UID: "8cdc662a-87eb-4af4-916f-fe3746b4a1f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.726646 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.726947 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:14 crc kubenswrapper[4803]: I0127 22:11:14.727042 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfkm7\" (UniqueName: \"kubernetes.io/projected/8cdc662a-87eb-4af4-916f-fe3746b4a1f0-kube-api-access-zfkm7\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:14 crc kubenswrapper[4803]: E0127 22:11:14.762688 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 27 22:11:14 crc kubenswrapper[4803]: E0127 22:11:14.763086 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7tznx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-xmlbc_openstack(6c9761e2-3f55-4c05-be61-594fa9592844): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 22:11:14 crc kubenswrapper[4803]: E0127 22:11:14.764444 4803 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-xmlbc" podUID="6c9761e2-3f55-4c05-be61-594fa9592844" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.158222 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-hgl6x" podUID="4f372552-a6b6-4446-ae72-d1a8370b514e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.173:5353: i/o timeout" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.245470 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jdcs2" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.245464 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jdcs2" event={"ID":"8cdc662a-87eb-4af4-916f-fe3746b4a1f0","Type":"ContainerDied","Data":"26a19313b462697835cfb93331a24f2ed252d2a09d248cb91758d71d1cd8fe32"} Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.245916 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26a19313b462697835cfb93331a24f2ed252d2a09d248cb91758d71d1cd8fe32" Jan 27 22:11:15 crc kubenswrapper[4803]: E0127 22:11:15.247240 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-xmlbc" podUID="6c9761e2-3f55-4c05-be61-594fa9592844" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.648860 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rvh2q"] Jan 27 22:11:15 crc kubenswrapper[4803]: E0127 22:11:15.649293 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f372552-a6b6-4446-ae72-d1a8370b514e" containerName="init" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.649309 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f372552-a6b6-4446-ae72-d1a8370b514e" containerName="init" Jan 27 22:11:15 crc kubenswrapper[4803]: E0127 22:11:15.649336 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f372552-a6b6-4446-ae72-d1a8370b514e" containerName="dnsmasq-dns" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.649343 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f372552-a6b6-4446-ae72-d1a8370b514e" containerName="dnsmasq-dns" Jan 27 22:11:15 crc kubenswrapper[4803]: E0127 22:11:15.649362 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cdc662a-87eb-4af4-916f-fe3746b4a1f0" containerName="neutron-db-sync" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.649370 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cdc662a-87eb-4af4-916f-fe3746b4a1f0" containerName="neutron-db-sync" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.649559 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f372552-a6b6-4446-ae72-d1a8370b514e" containerName="dnsmasq-dns" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.649575 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cdc662a-87eb-4af4-916f-fe3746b4a1f0" containerName="neutron-db-sync" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.650664 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.677677 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rvh2q"] Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.753996 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.754054 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.754093 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kl7v\" (UniqueName: \"kubernetes.io/projected/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-kube-api-access-9kl7v\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.754127 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.754452 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-config\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.754712 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-svc\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.797567 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-69fc44b874-lbwd9"] Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.802577 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.807817 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.807859 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-25f27" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.808547 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 22:11:15 crc kubenswrapper[4803]: E0127 22:11:15.814405 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 27 22:11:15 crc kubenswrapper[4803]: E0127 22:11:15.814817 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82b4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-lsh9s_openstack(d39e2273-cd2c-4e27-9890-39cf781c7508): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 22:11:15 
crc kubenswrapper[4803]: I0127 22:11:15.815011 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 22:11:15 crc kubenswrapper[4803]: E0127 22:11:15.816266 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-lsh9s" podUID="d39e2273-cd2c-4e27-9890-39cf781c7508" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.828011 4803 scope.go:117] "RemoveContainer" containerID="66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb" Jan 27 22:11:15 crc kubenswrapper[4803]: E0127 22:11:15.828519 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb\": container with ID starting with 66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb not found: ID does not exist" containerID="66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.828553 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb"} err="failed to get container status \"66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb\": rpc error: code = NotFound desc = could not find container \"66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb\": container with ID starting with 66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb not found: ID does not exist" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.828579 4803 scope.go:117] "RemoveContainer" containerID="becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28" Jan 27 22:11:15 crc kubenswrapper[4803]: E0127 22:11:15.830961 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28\": container with ID starting with becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28 not found: ID does not exist" containerID="becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.831003 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28"} err="failed to get container status \"becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28\": rpc error: code = NotFound desc = could not find container \"becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28\": container with ID starting with becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28 not found: ID does not exist" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.831026 4803 scope.go:117] "RemoveContainer" containerID="66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.831466 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb"} err="failed to get container status \"66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb\": rpc error: code = NotFound desc = could not find container 
\"66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb\": container with ID starting with 66ae55d677ce9ace8953dcc71e384c930ba26acfab79371d1a24cbaed4db29cb not found: ID does not exist" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.831493 4803 scope.go:117] "RemoveContainer" containerID="becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.832084 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28"} err="failed to get container status \"becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28\": rpc error: code = NotFound desc = could not find container \"becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28\": container with ID starting with becf7a43c75ddadc5d88362c89856e3413f10a864e303a978a8711924c8cfc28 not found: ID does not exist" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.832136 4803 scope.go:117] "RemoveContainer" containerID="a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.832266 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69fc44b874-lbwd9"] Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.858885 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.858928 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.858963 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kl7v\" (UniqueName: \"kubernetes.io/projected/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-kube-api-access-9kl7v\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.858997 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.859079 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-config\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.859127 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-svc\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: 
\"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.859966 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.859984 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-svc\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.860957 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.861071 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.862707 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-config\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.880921 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kl7v\" (UniqueName: \"kubernetes.io/projected/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-kube-api-access-9kl7v\") pod \"dnsmasq-dns-6b7b667979-rvh2q\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.960904 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-ovndb-tls-certs\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.960974 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-httpd-config\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.961022 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzzz\" (UniqueName: \"kubernetes.io/projected/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-kube-api-access-zvzzz\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " 
pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.961125 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-combined-ca-bundle\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.961145 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-config\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:15 crc kubenswrapper[4803]: I0127 22:11:15.986205 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.064620 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-httpd-config\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.064747 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzzz\" (UniqueName: \"kubernetes.io/projected/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-kube-api-access-zvzzz\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.065114 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-combined-ca-bundle\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.065192 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-config\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.065323 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-ovndb-tls-certs\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.104518 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-combined-ca-bundle\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.108471 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-ovndb-tls-certs\") pod 
\"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.108524 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-config\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.109490 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzzz\" (UniqueName: \"kubernetes.io/projected/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-kube-api-access-zvzzz\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.118747 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-httpd-config\") pod \"neutron-69fc44b874-lbwd9\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:16 crc kubenswrapper[4803]: E0127 22:11:16.277791 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-lsh9s" podUID="d39e2273-cd2c-4e27-9890-39cf781c7508" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.306445 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.335843 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f372552-a6b6-4446-ae72-d1a8370b514e" path="/var/lib/kubelet/pods/4f372552-a6b6-4446-ae72-d1a8370b514e/volumes" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.350547 4803 scope.go:117] "RemoveContainer" containerID="d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.415710 4803 scope.go:117] "RemoveContainer" containerID="a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb" Jan 27 22:11:16 crc kubenswrapper[4803]: E0127 22:11:16.420595 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb\": container with ID starting with a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb not found: ID does not exist" containerID="a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.420948 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb"} err="failed to get container status \"a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb\": rpc error: code = NotFound desc = could not find container \"a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb\": container with ID starting with a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb not found: ID does not exist" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.421079 4803 scope.go:117] 
"RemoveContainer" containerID="d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e" Jan 27 22:11:16 crc kubenswrapper[4803]: E0127 22:11:16.421800 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e\": container with ID starting with d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e not found: ID does not exist" containerID="d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.422215 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e"} err="failed to get container status \"d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e\": rpc error: code = NotFound desc = could not find container \"d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e\": container with ID starting with d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e not found: ID does not exist" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.422237 4803 scope.go:117] "RemoveContainer" containerID="a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.422793 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb"} err="failed to get container status \"a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb\": rpc error: code = NotFound desc = could not find container \"a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb\": container with ID starting with a86670e1831f8b48c56d409d5881134114cabc60c6b91cef1312d294799d40cb not found: ID does not exist" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.422918 4803 scope.go:117] "RemoveContainer" containerID="d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.423606 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e"} err="failed to get container status \"d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e\": rpc error: code = NotFound desc = could not find container \"d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e\": container with ID starting with d73f1fdb89e6d925fddfacb8caef90c810c3396e2689d89d5e570045595a1a4e not found: ID does not exist" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.423637 4803 scope.go:117] "RemoveContainer" containerID="0bb285dc2f8321c8967cdbae618d0f4e33222f38b4dea29384bc4cfe8babc946" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.512467 4803 scope.go:117] "RemoveContainer" containerID="b2f70853bc245f6d4e2d7f66349934e5548fdba735cdc5e35572f5c864a5a7bc" Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.572225 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-df2jx"] Jan 27 22:11:16 crc kubenswrapper[4803]: W0127 22:11:16.580111 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1309b4e_8ae9_4e41_ba61_1003d755c889.slice/crio-de539608a4455a7294704a37787b5b99d301687f6883f236aaf9d14231e21a60 WatchSource:0}: Error 
finding container de539608a4455a7294704a37787b5b99d301687f6883f236aaf9d14231e21a60: Status 404 returned error can't find the container with id de539608a4455a7294704a37787b5b99d301687f6883f236aaf9d14231e21a60 Jan 27 22:11:16 crc kubenswrapper[4803]: W0127 22:11:16.649035 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2568e2db_68d1_49fc_a0fd_363e983d8b97.slice/crio-3fa3b196fce23bb819351d343db44a9cd1c296a8c0ba4756f257a21774339217 WatchSource:0}: Error finding container 3fa3b196fce23bb819351d343db44a9cd1c296a8c0ba4756f257a21774339217: Status 404 returned error can't find the container with id 3fa3b196fce23bb819351d343db44a9cd1c296a8c0ba4756f257a21774339217 Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.651791 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.781084 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rvh2q"] Jan 27 22:11:16 crc kubenswrapper[4803]: I0127 22:11:16.879326 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.073185 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69fc44b874-lbwd9"] Jan 27 22:11:17 crc kubenswrapper[4803]: W0127 22:11:17.244153 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1dfb047_0985_4a6f_955d_e5c4a4dff5ea.slice/crio-ca56329e39f9bf7ec809956ffea6158e0a250e7879bd232eb1ccfe092ad252e1 WatchSource:0}: Error finding container ca56329e39f9bf7ec809956ffea6158e0a250e7879bd232eb1ccfe092ad252e1: Status 404 returned error can't find the container with id ca56329e39f9bf7ec809956ffea6158e0a250e7879bd232eb1ccfe092ad252e1 Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.309592 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ngppz" event={"ID":"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca","Type":"ContainerStarted","Data":"f15531efad6f152f886a431d94056653f2ba603c0b35d376bb8d362002999af5"} Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.311777 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" event={"ID":"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c","Type":"ContainerStarted","Data":"14a47211dd76d7b8876f0289007dea018127628db4923357da5f0f78d90bd2cd"} Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.313436 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2568e2db-68d1-49fc-a0fd-363e983d8b97","Type":"ContainerStarted","Data":"3fa3b196fce23bb819351d343db44a9cd1c296a8c0ba4756f257a21774339217"} Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.332831 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ngppz" podStartSLOduration=4.9764535500000004 podStartE2EDuration="29.332808813s" podCreationTimestamp="2026-01-27 22:10:48 +0000 UTC" firstStartedPulling="2026-01-27 22:10:50.105896507 +0000 UTC m=+1402.521918206" lastFinishedPulling="2026-01-27 22:11:14.46225177 +0000 UTC m=+1426.878273469" observedRunningTime="2026-01-27 22:11:17.32232196 +0000 UTC m=+1429.738343679" watchObservedRunningTime="2026-01-27 22:11:17.332808813 +0000 UTC m=+1429.748830512" Jan 27 22:11:17 crc kubenswrapper[4803]: 
I0127 22:11:17.336581 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fc44b874-lbwd9" event={"ID":"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea","Type":"ContainerStarted","Data":"ca56329e39f9bf7ec809956ffea6158e0a250e7879bd232eb1ccfe092ad252e1"} Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.346697 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-df2jx" event={"ID":"c1309b4e-8ae9-4e41-ba61-1003d755c889","Type":"ContainerStarted","Data":"df626e6c49acc2230001cea15abc0c70175171ca9ef46cb26823caa839335564"} Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.346771 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-df2jx" event={"ID":"c1309b4e-8ae9-4e41-ba61-1003d755c889","Type":"ContainerStarted","Data":"de539608a4455a7294704a37787b5b99d301687f6883f236aaf9d14231e21a60"} Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.348339 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8215d5aa-a30a-4a03-8058-509b5d04b261","Type":"ContainerStarted","Data":"e590bb33bb02e4b77055ab045232498ccfb752f65bcdd540b855a418438a6cee"} Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.363941 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-df2jx" podStartSLOduration=16.363923992 podStartE2EDuration="16.363923992s" podCreationTimestamp="2026-01-27 22:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:17.362435101 +0000 UTC m=+1429.778456800" watchObservedRunningTime="2026-01-27 22:11:17.363923992 +0000 UTC m=+1429.779945691" Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.988890 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6757ddbf5c-pprm6"] Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.991331 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:17 crc kubenswrapper[4803]: I0127 22:11:17.996615 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6757ddbf5c-pprm6"] Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.024260 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.024543 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.136763 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-public-tls-certs\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.136892 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-config\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.136977 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2p45\" (UniqueName: \"kubernetes.io/projected/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-kube-api-access-z2p45\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.137132 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-combined-ca-bundle\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.137192 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-internal-tls-certs\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.137210 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-ovndb-tls-certs\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.137226 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-httpd-config\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.240116 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-public-tls-certs\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.240202 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-config\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.240250 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2p45\" (UniqueName: \"kubernetes.io/projected/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-kube-api-access-z2p45\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.240352 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-combined-ca-bundle\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.240396 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-internal-tls-certs\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.240417 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-ovndb-tls-certs\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.240440 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-httpd-config\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.250836 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-config\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.251386 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-internal-tls-certs\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.252095 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-public-tls-certs\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " 
pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.253275 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-httpd-config\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.256818 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-combined-ca-bundle\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.259061 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-ovndb-tls-certs\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.270036 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2p45\" (UniqueName: \"kubernetes.io/projected/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-kube-api-access-z2p45\") pod \"neutron-6757ddbf5c-pprm6\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.380897 4803 generic.go:334] "Generic (PLEG): container finished" podID="2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" containerID="359f66355a0df0762e8d92b57360fe5b05969ebdc795121a046c141f783e1cd3" exitCode=0 Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.380982 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" event={"ID":"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c","Type":"ContainerDied","Data":"359f66355a0df0762e8d92b57360fe5b05969ebdc795121a046c141f783e1cd3"} Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.394253 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e867acab-94c1-404c-976b-c1af058a4a24","Type":"ContainerStarted","Data":"660ed7114a0681ca3b2ad9e6c2672f582f547ac404aea0e2decc165328e70b73"} Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.402819 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fc44b874-lbwd9" event={"ID":"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea","Type":"ContainerStarted","Data":"724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709"} Jan 27 22:11:18 crc kubenswrapper[4803]: I0127 22:11:18.534548 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.281328 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6757ddbf5c-pprm6"] Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.450716 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6757ddbf5c-pprm6" event={"ID":"f4a1a8ca-af9c-47d3-82a6-1ce97b165924","Type":"ContainerStarted","Data":"c6439faa79cf9076957d3be899fa35ae6ca71a0e07b9ea7883aeba2887389ecc"} Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.459464 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fc44b874-lbwd9" event={"ID":"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea","Type":"ContainerStarted","Data":"d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886"} Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.460176 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.466892 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8215d5aa-a30a-4a03-8058-509b5d04b261","Type":"ContainerStarted","Data":"ef4cf056b8f9b84ecba3e9ad2a548f23e4339dc78dcb5d24c92c6a7502b9af85"} Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.484623 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" event={"ID":"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c","Type":"ContainerStarted","Data":"24ac39c76b0dea6fb0a0ce7aa891496d6b726a67b25ec2d1d71a2d8e1f5e25ea"} Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.484687 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.485989 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-69fc44b874-lbwd9" podStartSLOduration=4.485976494 podStartE2EDuration="4.485976494s" podCreationTimestamp="2026-01-27 22:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:19.484596118 +0000 UTC m=+1431.900617817" watchObservedRunningTime="2026-01-27 22:11:19.485976494 +0000 UTC m=+1431.901998193" Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.498805 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2568e2db-68d1-49fc-a0fd-363e983d8b97","Type":"ContainerStarted","Data":"9314ccf5202326dc651edcf0da21dcead4e773b9b60ade52d9912a9d7c50270c"} Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.498858 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2568e2db-68d1-49fc-a0fd-363e983d8b97","Type":"ContainerStarted","Data":"9a61044a78af284d7e7f3fe7776badd0c8ff2f8c2516d15226ffc4eaa2c4ec1b"} Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.523637 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" podStartSLOduration=4.5236164389999995 podStartE2EDuration="4.523616439s" podCreationTimestamp="2026-01-27 22:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:19.516062885 +0000 UTC m=+1431.932084584" 
watchObservedRunningTime="2026-01-27 22:11:19.523616439 +0000 UTC m=+1431.939638138" Jan 27 22:11:19 crc kubenswrapper[4803]: I0127 22:11:19.545816 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=24.545798337 podStartE2EDuration="24.545798337s" podCreationTimestamp="2026-01-27 22:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:19.535082108 +0000 UTC m=+1431.951103817" watchObservedRunningTime="2026-01-27 22:11:19.545798337 +0000 UTC m=+1431.961820026" Jan 27 22:11:20 crc kubenswrapper[4803]: I0127 22:11:20.545683 4803 generic.go:334] "Generic (PLEG): container finished" podID="17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca" containerID="f15531efad6f152f886a431d94056653f2ba603c0b35d376bb8d362002999af5" exitCode=0 Jan 27 22:11:20 crc kubenswrapper[4803]: I0127 22:11:20.546376 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ngppz" event={"ID":"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca","Type":"ContainerDied","Data":"f15531efad6f152f886a431d94056653f2ba603c0b35d376bb8d362002999af5"} Jan 27 22:11:20 crc kubenswrapper[4803]: I0127 22:11:20.548779 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6757ddbf5c-pprm6" event={"ID":"f4a1a8ca-af9c-47d3-82a6-1ce97b165924","Type":"ContainerStarted","Data":"69eec6eafa07ee07a80b49fcc5b45fd29e0818680f57443cc5152c5e9613a0e8"} Jan 27 22:11:20 crc kubenswrapper[4803]: I0127 22:11:20.548813 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6757ddbf5c-pprm6" event={"ID":"f4a1a8ca-af9c-47d3-82a6-1ce97b165924","Type":"ContainerStarted","Data":"78568fb3db5c74fd564077b566b910d6edb0c0f3c55607d46b0f159f38873b29"} Jan 27 22:11:20 crc kubenswrapper[4803]: I0127 22:11:20.550362 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:20 crc kubenswrapper[4803]: I0127 22:11:20.551839 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8215d5aa-a30a-4a03-8058-509b5d04b261","Type":"ContainerStarted","Data":"358f275c56eb86e806cfe67db4dc7828a1452e7f59367d2056a388ab1dbad289"} Jan 27 22:11:20 crc kubenswrapper[4803]: I0127 22:11:20.586814 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=25.586796368 podStartE2EDuration="25.586796368s" podCreationTimestamp="2026-01-27 22:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:20.582097922 +0000 UTC m=+1432.998119621" watchObservedRunningTime="2026-01-27 22:11:20.586796368 +0000 UTC m=+1433.002818067" Jan 27 22:11:20 crc kubenswrapper[4803]: I0127 22:11:20.606768 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6757ddbf5c-pprm6" podStartSLOduration=3.606747737 podStartE2EDuration="3.606747737s" podCreationTimestamp="2026-01-27 22:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:20.606104329 +0000 UTC m=+1433.022126018" watchObservedRunningTime="2026-01-27 22:11:20.606747737 +0000 UTC m=+1433.022769436" Jan 27 22:11:21 crc kubenswrapper[4803]: I0127 22:11:21.567661 4803 generic.go:334] 
"Generic (PLEG): container finished" podID="c1309b4e-8ae9-4e41-ba61-1003d755c889" containerID="df626e6c49acc2230001cea15abc0c70175171ca9ef46cb26823caa839335564" exitCode=0 Jan 27 22:11:21 crc kubenswrapper[4803]: I0127 22:11:21.567742 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-df2jx" event={"ID":"c1309b4e-8ae9-4e41-ba61-1003d755c889","Type":"ContainerDied","Data":"df626e6c49acc2230001cea15abc0c70175171ca9ef46cb26823caa839335564"} Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.819936 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.820584 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.820609 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.821557 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.851630 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.852179 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.852247 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.852266 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.879583 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.906951 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.908270 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.909202 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 22:11:25 crc kubenswrapper[4803]: I0127 22:11:25.989050 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:26 crc kubenswrapper[4803]: I0127 22:11:26.046133 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b7894"] Jan 27 22:11:26 crc kubenswrapper[4803]: I0127 22:11:26.046383 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" podUID="ab4264b0-50c7-4427-8187-d7df34f01035" containerName="dnsmasq-dns" containerID="cri-o://63c39221727abd63a986b017692343b35a69c84a6f4a522b00223e48384012d0" gracePeriod=10 Jan 27 22:11:26 crc kubenswrapper[4803]: I0127 22:11:26.618712 4803 generic.go:334] "Generic (PLEG): container finished" podID="ab4264b0-50c7-4427-8187-d7df34f01035" 
containerID="63c39221727abd63a986b017692343b35a69c84a6f4a522b00223e48384012d0" exitCode=0 Jan 27 22:11:26 crc kubenswrapper[4803]: I0127 22:11:26.621078 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" event={"ID":"ab4264b0-50c7-4427-8187-d7df34f01035","Type":"ContainerDied","Data":"63c39221727abd63a986b017692343b35a69c84a6f4a522b00223e48384012d0"} Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.055073 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.059274 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ngppz" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.150888 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-scripts\") pod \"c1309b4e-8ae9-4e41-ba61-1003d755c889\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.150987 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-config-data\") pod \"c1309b4e-8ae9-4e41-ba61-1003d755c889\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.151011 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8trpr\" (UniqueName: \"kubernetes.io/projected/c1309b4e-8ae9-4e41-ba61-1003d755c889-kube-api-access-8trpr\") pod \"c1309b4e-8ae9-4e41-ba61-1003d755c889\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.151050 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-credential-keys\") pod \"c1309b4e-8ae9-4e41-ba61-1003d755c889\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.151105 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-fernet-keys\") pod \"c1309b4e-8ae9-4e41-ba61-1003d755c889\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.151171 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-combined-ca-bundle\") pod \"c1309b4e-8ae9-4e41-ba61-1003d755c889\" (UID: \"c1309b4e-8ae9-4e41-ba61-1003d755c889\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.164953 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c1309b4e-8ae9-4e41-ba61-1003d755c889" (UID: "c1309b4e-8ae9-4e41-ba61-1003d755c889"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.168998 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-scripts" (OuterVolumeSpecName: "scripts") pod "c1309b4e-8ae9-4e41-ba61-1003d755c889" (UID: "c1309b4e-8ae9-4e41-ba61-1003d755c889"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.177990 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c1309b4e-8ae9-4e41-ba61-1003d755c889" (UID: "c1309b4e-8ae9-4e41-ba61-1003d755c889"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.200268 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1309b4e-8ae9-4e41-ba61-1003d755c889-kube-api-access-8trpr" (OuterVolumeSpecName: "kube-api-access-8trpr") pod "c1309b4e-8ae9-4e41-ba61-1003d755c889" (UID: "c1309b4e-8ae9-4e41-ba61-1003d755c889"). InnerVolumeSpecName "kube-api-access-8trpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.241980 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-config-data" (OuterVolumeSpecName: "config-data") pod "c1309b4e-8ae9-4e41-ba61-1003d755c889" (UID: "c1309b4e-8ae9-4e41-ba61-1003d755c889"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.253783 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-logs\") pod \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.253941 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc2sq\" (UniqueName: \"kubernetes.io/projected/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-kube-api-access-cc2sq\") pod \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.254056 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-scripts\") pod \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.254147 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-config-data\") pod \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.254256 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-combined-ca-bundle\") pod \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\" (UID: \"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca\") " Jan 27 22:11:27 crc 
kubenswrapper[4803]: I0127 22:11:27.277992 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8trpr\" (UniqueName: \"kubernetes.io/projected/c1309b4e-8ae9-4e41-ba61-1003d755c889-kube-api-access-8trpr\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.278033 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.278045 4803 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.278063 4803 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.278072 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.285001 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-logs" (OuterVolumeSpecName: "logs") pod "17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca" (UID: "17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.287484 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-kube-api-access-cc2sq" (OuterVolumeSpecName: "kube-api-access-cc2sq") pod "17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca" (UID: "17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca"). InnerVolumeSpecName "kube-api-access-cc2sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.306365 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-scripts" (OuterVolumeSpecName: "scripts") pod "17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca" (UID: "17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.333066 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca" (UID: "17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.347164 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1309b4e-8ae9-4e41-ba61-1003d755c889" (UID: "c1309b4e-8ae9-4e41-ba61-1003d755c889"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.350821 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-config-data" (OuterVolumeSpecName: "config-data") pod "17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca" (UID: "17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.381546 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-logs\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.381597 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc2sq\" (UniqueName: \"kubernetes.io/projected/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-kube-api-access-cc2sq\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.381615 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1309b4e-8ae9-4e41-ba61-1003d755c889-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.381629 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.381639 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.381650 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.394549 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.483285 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-swift-storage-0\") pod \"ab4264b0-50c7-4427-8187-d7df34f01035\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.483676 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-config\") pod \"ab4264b0-50c7-4427-8187-d7df34f01035\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.483769 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn859\" (UniqueName: \"kubernetes.io/projected/ab4264b0-50c7-4427-8187-d7df34f01035-kube-api-access-fn859\") pod \"ab4264b0-50c7-4427-8187-d7df34f01035\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.483839 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-nb\") pod \"ab4264b0-50c7-4427-8187-d7df34f01035\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.483886 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-svc\") pod \"ab4264b0-50c7-4427-8187-d7df34f01035\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.484113 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-sb\") pod \"ab4264b0-50c7-4427-8187-d7df34f01035\" (UID: \"ab4264b0-50c7-4427-8187-d7df34f01035\") " Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.499504 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4264b0-50c7-4427-8187-d7df34f01035-kube-api-access-fn859" (OuterVolumeSpecName: "kube-api-access-fn859") pod "ab4264b0-50c7-4427-8187-d7df34f01035" (UID: "ab4264b0-50c7-4427-8187-d7df34f01035"). InnerVolumeSpecName "kube-api-access-fn859". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.551560 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ab4264b0-50c7-4427-8187-d7df34f01035" (UID: "ab4264b0-50c7-4427-8187-d7df34f01035"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.563396 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab4264b0-50c7-4427-8187-d7df34f01035" (UID: "ab4264b0-50c7-4427-8187-d7df34f01035"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.565713 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab4264b0-50c7-4427-8187-d7df34f01035" (UID: "ab4264b0-50c7-4427-8187-d7df34f01035"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.577269 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab4264b0-50c7-4427-8187-d7df34f01035" (UID: "ab4264b0-50c7-4427-8187-d7df34f01035"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.577485 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-config" (OuterVolumeSpecName: "config") pod "ab4264b0-50c7-4427-8187-d7df34f01035" (UID: "ab4264b0-50c7-4427-8187-d7df34f01035"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.586631 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn859\" (UniqueName: \"kubernetes.io/projected/ab4264b0-50c7-4427-8187-d7df34f01035-kube-api-access-fn859\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.586666 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.586676 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.586685 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.586693 4803 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.586700 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4264b0-50c7-4427-8187-d7df34f01035-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.631005 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e867acab-94c1-404c-976b-c1af058a4a24","Type":"ContainerStarted","Data":"0eb9a45ca9fc457c7b2bd4580bb21b0a47dbcfe3480890bccbe6c4bb6d1f4212"} Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.632457 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ngppz" event={"ID":"17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca","Type":"ContainerDied","Data":"ac76884c58d1e00a3380e7d90825206cb3dee3216e3eb0d39c32bface18e9c0e"} Jan 27 
22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.632496 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac76884c58d1e00a3380e7d90825206cb3dee3216e3eb0d39c32bface18e9c0e" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.632718 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ngppz" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.634110 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" event={"ID":"ab4264b0-50c7-4427-8187-d7df34f01035","Type":"ContainerDied","Data":"27140fdfc309ad34fb3119070fe610e587e5997df2faf865bc15952dc32a2c21"} Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.634140 4803 scope.go:117] "RemoveContainer" containerID="63c39221727abd63a986b017692343b35a69c84a6f4a522b00223e48384012d0" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.634158 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-b7894" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.638796 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-df2jx" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.638914 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-df2jx" event={"ID":"c1309b4e-8ae9-4e41-ba61-1003d755c889","Type":"ContainerDied","Data":"de539608a4455a7294704a37787b5b99d301687f6883f236aaf9d14231e21a60"} Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.638941 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de539608a4455a7294704a37787b5b99d301687f6883f236aaf9d14231e21a60" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.663392 4803 scope.go:117] "RemoveContainer" containerID="6d364fd9b52feeb520e2b8499b4afb4b1c3415c78128673fd628c61d935d63f5" Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.684896 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b7894"] Jan 27 22:11:27 crc kubenswrapper[4803]: I0127 22:11:27.692835 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b7894"] Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.176103 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-75677f8887-xwsk2"] Jan 27 22:11:28 crc kubenswrapper[4803]: E0127 22:11:28.176772 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4264b0-50c7-4427-8187-d7df34f01035" containerName="dnsmasq-dns" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.176786 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4264b0-50c7-4427-8187-d7df34f01035" containerName="dnsmasq-dns" Jan 27 22:11:28 crc kubenswrapper[4803]: E0127 22:11:28.176808 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4264b0-50c7-4427-8187-d7df34f01035" containerName="init" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.176813 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4264b0-50c7-4427-8187-d7df34f01035" containerName="init" Jan 27 22:11:28 crc kubenswrapper[4803]: E0127 22:11:28.176827 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1309b4e-8ae9-4e41-ba61-1003d755c889" containerName="keystone-bootstrap" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.176834 4803 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c1309b4e-8ae9-4e41-ba61-1003d755c889" containerName="keystone-bootstrap" Jan 27 22:11:28 crc kubenswrapper[4803]: E0127 22:11:28.177079 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca" containerName="placement-db-sync" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.177089 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca" containerName="placement-db-sync" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.177291 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca" containerName="placement-db-sync" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.177307 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab4264b0-50c7-4427-8187-d7df34f01035" containerName="dnsmasq-dns" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.177329 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1309b4e-8ae9-4e41-ba61-1003d755c889" containerName="keystone-bootstrap" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.178083 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.185590 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.186078 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.186145 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.186404 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.186978 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wcv24" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.189515 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.197122 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75677f8887-xwsk2"] Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.300001 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5b8df6b68b-dmsbm"] Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.301964 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.304566 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.304732 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.309919 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-config-data\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.310023 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp2js\" (UniqueName: \"kubernetes.io/projected/91aac3c2-75e7-4359-8d5f-96ddab2abae2-kube-api-access-xp2js\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.310084 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-fernet-keys\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.310114 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-credential-keys\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.310139 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-combined-ca-bundle\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.310182 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-internal-tls-certs\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.310210 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-scripts\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.310268 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-public-tls-certs\") pod \"keystone-75677f8887-xwsk2\" (UID: 
\"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.310446 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nngmh" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.315129 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.317031 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.319728 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab4264b0-50c7-4427-8187-d7df34f01035" path="/var/lib/kubelet/pods/ab4264b0-50c7-4427-8187-d7df34f01035/volumes" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.320346 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b8df6b68b-dmsbm"] Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.412392 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-internal-tls-certs\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.412456 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-fernet-keys\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.412488 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-credential-keys\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.412518 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-combined-ca-bundle\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.412539 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvr8p\" (UniqueName: \"kubernetes.io/projected/71b5940f-523d-4fce-b807-5db4fc97336d-kube-api-access-dvr8p\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.412580 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-internal-tls-certs\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.412631 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-scripts\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.412650 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-scripts\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.412828 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-public-tls-certs\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.412902 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-public-tls-certs\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.413092 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-config-data\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.413153 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-combined-ca-bundle\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.413257 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp2js\" (UniqueName: \"kubernetes.io/projected/91aac3c2-75e7-4359-8d5f-96ddab2abae2-kube-api-access-xp2js\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.413285 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-config-data\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.413355 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71b5940f-523d-4fce-b807-5db4fc97336d-logs\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.417022 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.417110 4803 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.417036 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.417223 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.417347 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.424478 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-combined-ca-bundle\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.427653 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-public-tls-certs\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.428624 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-scripts\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.428712 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-internal-tls-certs\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.429114 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-credential-keys\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.430297 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-fernet-keys\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.431496 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91aac3c2-75e7-4359-8d5f-96ddab2abae2-config-data\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.434523 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp2js\" (UniqueName: \"kubernetes.io/projected/91aac3c2-75e7-4359-8d5f-96ddab2abae2-kube-api-access-xp2js\") pod \"keystone-75677f8887-xwsk2\" (UID: \"91aac3c2-75e7-4359-8d5f-96ddab2abae2\") " 
pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.502334 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wcv24" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.511637 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.516043 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-scripts\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.516204 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-public-tls-certs\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.516563 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-combined-ca-bundle\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.516677 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-config-data\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.516760 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71b5940f-523d-4fce-b807-5db4fc97336d-logs\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.516828 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-internal-tls-certs\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.517020 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvr8p\" (UniqueName: \"kubernetes.io/projected/71b5940f-523d-4fce-b807-5db4fc97336d-kube-api-access-dvr8p\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.517668 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71b5940f-523d-4fce-b807-5db4fc97336d-logs\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.521648 4803 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-scripts\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.526600 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-public-tls-certs\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.532937 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-config-data\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.533304 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-internal-tls-certs\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.533726 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b5940f-523d-4fce-b807-5db4fc97336d-combined-ca-bundle\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.545539 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvr8p\" (UniqueName: \"kubernetes.io/projected/71b5940f-523d-4fce-b807-5db4fc97336d-kube-api-access-dvr8p\") pod \"placement-5b8df6b68b-dmsbm\" (UID: \"71b5940f-523d-4fce-b807-5db4fc97336d\") " pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:28 crc kubenswrapper[4803]: I0127 22:11:28.621453 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:29 crc kubenswrapper[4803]: I0127 22:11:29.078002 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75677f8887-xwsk2"] Jan 27 22:11:29 crc kubenswrapper[4803]: I0127 22:11:29.276212 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b8df6b68b-dmsbm"] Jan 27 22:11:29 crc kubenswrapper[4803]: I0127 22:11:29.690226 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75677f8887-xwsk2" event={"ID":"91aac3c2-75e7-4359-8d5f-96ddab2abae2","Type":"ContainerStarted","Data":"5a23b43505c1eb6798312154d30c309133a7e4d11dacc7132ef69c39dc0c739b"} Jan 27 22:11:29 crc kubenswrapper[4803]: I0127 22:11:29.690634 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:11:29 crc kubenswrapper[4803]: I0127 22:11:29.690652 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75677f8887-xwsk2" event={"ID":"91aac3c2-75e7-4359-8d5f-96ddab2abae2","Type":"ContainerStarted","Data":"0180c0f7d7d8c353cb678dff5a9df9481d68db50ffc98a75d73fd0e81683d168"} Jan 27 22:11:29 crc kubenswrapper[4803]: I0127 22:11:29.692213 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b8df6b68b-dmsbm" event={"ID":"71b5940f-523d-4fce-b807-5db4fc97336d","Type":"ContainerStarted","Data":"47b30a4a84778de086e90b188a11e6061b37e57f168c06b5ceada619720f8a41"} Jan 27 22:11:29 crc kubenswrapper[4803]: I0127 22:11:29.692252 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b8df6b68b-dmsbm" event={"ID":"71b5940f-523d-4fce-b807-5db4fc97336d","Type":"ContainerStarted","Data":"57b7af51b8e23681ccc60ccc7f0cc8df90f182348c1f598c23ec6baf83bbafc6"} Jan 27 22:11:30 crc kubenswrapper[4803]: I0127 22:11:30.350694 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-75677f8887-xwsk2" podStartSLOduration=2.350674847 podStartE2EDuration="2.350674847s" podCreationTimestamp="2026-01-27 22:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:29.712042998 +0000 UTC m=+1442.128064697" watchObservedRunningTime="2026-01-27 22:11:30.350674847 +0000 UTC m=+1442.766696546" Jan 27 22:11:30 crc kubenswrapper[4803]: I0127 22:11:30.703964 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vntfr" event={"ID":"3469063f-f2e9-46a9-bc44-bb35cf4b2149","Type":"ContainerStarted","Data":"5dd55ed1ac295f9ed4bba4166a56f138e235ff67a9faa616fbfc4e7b7718ada8"} Jan 27 22:11:30 crc kubenswrapper[4803]: I0127 22:11:30.706025 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b8df6b68b-dmsbm" event={"ID":"71b5940f-523d-4fce-b807-5db4fc97336d","Type":"ContainerStarted","Data":"5c74cc85025d32c3a77fd87f32bacebc4689c2d873d7e06be621dcb66acc18b0"} Jan 27 22:11:30 crc kubenswrapper[4803]: I0127 22:11:30.719801 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-vntfr" podStartSLOduration=2.740068035 podStartE2EDuration="42.719785784s" podCreationTimestamp="2026-01-27 22:10:48 +0000 UTC" firstStartedPulling="2026-01-27 22:10:50.158228266 +0000 UTC m=+1402.574249975" lastFinishedPulling="2026-01-27 22:11:30.137946025 +0000 UTC m=+1442.553967724" observedRunningTime="2026-01-27 22:11:30.717237125 +0000 UTC m=+1443.133258824" 
watchObservedRunningTime="2026-01-27 22:11:30.719785784 +0000 UTC m=+1443.135807483" Jan 27 22:11:30 crc kubenswrapper[4803]: I0127 22:11:30.721974 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 22:11:30 crc kubenswrapper[4803]: I0127 22:11:30.722093 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 22:11:30 crc kubenswrapper[4803]: I0127 22:11:30.726354 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 22:11:30 crc kubenswrapper[4803]: I0127 22:11:30.727564 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 22:11:30 crc kubenswrapper[4803]: I0127 22:11:30.727633 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 22:11:30 crc kubenswrapper[4803]: I0127 22:11:30.781724 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5b8df6b68b-dmsbm" podStartSLOduration=2.781695602 podStartE2EDuration="2.781695602s" podCreationTimestamp="2026-01-27 22:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:30.751685993 +0000 UTC m=+1443.167707722" watchObservedRunningTime="2026-01-27 22:11:30.781695602 +0000 UTC m=+1443.197717291" Jan 27 22:11:30 crc kubenswrapper[4803]: I0127 22:11:30.824999 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 22:11:31 crc kubenswrapper[4803]: I0127 22:11:31.723827 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xmlbc" event={"ID":"6c9761e2-3f55-4c05-be61-594fa9592844","Type":"ContainerStarted","Data":"744dc1266933be12f4db531dc2df53a58237c8a4f13be5a6231dd0ebbc2d4974"} Jan 27 22:11:31 crc kubenswrapper[4803]: I0127 22:11:31.724477 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:31 crc kubenswrapper[4803]: I0127 22:11:31.724525 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:11:31 crc kubenswrapper[4803]: I0127 22:11:31.760162 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-xmlbc" podStartSLOduration=2.326500052 podStartE2EDuration="43.760145068s" podCreationTimestamp="2026-01-27 22:10:48 +0000 UTC" firstStartedPulling="2026-01-27 22:10:49.560084059 +0000 UTC m=+1401.976105758" lastFinishedPulling="2026-01-27 22:11:30.993729075 +0000 UTC m=+1443.409750774" observedRunningTime="2026-01-27 22:11:31.75167148 +0000 UTC m=+1444.167693189" watchObservedRunningTime="2026-01-27 22:11:31.760145068 +0000 UTC m=+1444.176166767" Jan 27 22:11:32 crc kubenswrapper[4803]: I0127 22:11:32.737204 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lsh9s" event={"ID":"d39e2273-cd2c-4e27-9890-39cf781c7508","Type":"ContainerStarted","Data":"c96c9e346836fe66124e1aa99deb802706dfb5b2e575bd2038955c360ff6ef4d"} Jan 27 22:11:32 crc kubenswrapper[4803]: I0127 22:11:32.764904 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-lsh9s" podStartSLOduration=3.00709515 podStartE2EDuration="44.764884413s" podCreationTimestamp="2026-01-27 22:10:48 +0000 UTC" 
firstStartedPulling="2026-01-27 22:10:50.183198419 +0000 UTC m=+1402.599220118" lastFinishedPulling="2026-01-27 22:11:31.940987682 +0000 UTC m=+1444.357009381" observedRunningTime="2026-01-27 22:11:32.756447346 +0000 UTC m=+1445.172469065" watchObservedRunningTime="2026-01-27 22:11:32.764884413 +0000 UTC m=+1445.180906112" Jan 27 22:11:33 crc kubenswrapper[4803]: I0127 22:11:33.750297 4803 generic.go:334] "Generic (PLEG): container finished" podID="3469063f-f2e9-46a9-bc44-bb35cf4b2149" containerID="5dd55ed1ac295f9ed4bba4166a56f138e235ff67a9faa616fbfc4e7b7718ada8" exitCode=0 Jan 27 22:11:33 crc kubenswrapper[4803]: I0127 22:11:33.751331 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vntfr" event={"ID":"3469063f-f2e9-46a9-bc44-bb35cf4b2149","Type":"ContainerDied","Data":"5dd55ed1ac295f9ed4bba4166a56f138e235ff67a9faa616fbfc4e7b7718ada8"} Jan 27 22:11:35 crc kubenswrapper[4803]: I0127 22:11:35.780861 4803 generic.go:334] "Generic (PLEG): container finished" podID="6c9761e2-3f55-4c05-be61-594fa9592844" containerID="744dc1266933be12f4db531dc2df53a58237c8a4f13be5a6231dd0ebbc2d4974" exitCode=0 Jan 27 22:11:35 crc kubenswrapper[4803]: I0127 22:11:35.781350 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xmlbc" event={"ID":"6c9761e2-3f55-4c05-be61-594fa9592844","Type":"ContainerDied","Data":"744dc1266933be12f4db531dc2df53a58237c8a4f13be5a6231dd0ebbc2d4974"} Jan 27 22:11:36 crc kubenswrapper[4803]: I0127 22:11:36.801309 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vntfr" event={"ID":"3469063f-f2e9-46a9-bc44-bb35cf4b2149","Type":"ContainerDied","Data":"5c2cbf4d8273f9aed0ab462fe1125fc036e244670159d25d60bb53fa1612a41d"} Jan 27 22:11:36 crc kubenswrapper[4803]: I0127 22:11:36.801708 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c2cbf4d8273f9aed0ab462fe1125fc036e244670159d25d60bb53fa1612a41d" Jan 27 22:11:36 crc kubenswrapper[4803]: I0127 22:11:36.903057 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vntfr" Jan 27 22:11:36 crc kubenswrapper[4803]: I0127 22:11:36.981870 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-combined-ca-bundle\") pod \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " Jan 27 22:11:36 crc kubenswrapper[4803]: I0127 22:11:36.982001 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfw5q\" (UniqueName: \"kubernetes.io/projected/3469063f-f2e9-46a9-bc44-bb35cf4b2149-kube-api-access-vfw5q\") pod \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " Jan 27 22:11:36 crc kubenswrapper[4803]: I0127 22:11:36.982046 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-db-sync-config-data\") pod \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\" (UID: \"3469063f-f2e9-46a9-bc44-bb35cf4b2149\") " Jan 27 22:11:36 crc kubenswrapper[4803]: I0127 22:11:36.991413 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3469063f-f2e9-46a9-bc44-bb35cf4b2149" (UID: "3469063f-f2e9-46a9-bc44-bb35cf4b2149"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:36 crc kubenswrapper[4803]: E0127 22:11:36.997526 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="e867acab-94c1-404c-976b-c1af058a4a24" Jan 27 22:11:36 crc kubenswrapper[4803]: I0127 22:11:36.998376 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3469063f-f2e9-46a9-bc44-bb35cf4b2149-kube-api-access-vfw5q" (OuterVolumeSpecName: "kube-api-access-vfw5q") pod "3469063f-f2e9-46a9-bc44-bb35cf4b2149" (UID: "3469063f-f2e9-46a9-bc44-bb35cf4b2149"). InnerVolumeSpecName "kube-api-access-vfw5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.028667 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3469063f-f2e9-46a9-bc44-bb35cf4b2149" (UID: "3469063f-f2e9-46a9-bc44-bb35cf4b2149"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.085967 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.085998 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfw5q\" (UniqueName: \"kubernetes.io/projected/3469063f-f2e9-46a9-bc44-bb35cf4b2149-kube-api-access-vfw5q\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.086024 4803 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3469063f-f2e9-46a9-bc44-bb35cf4b2149-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.109461 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xmlbc" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.187619 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tznx\" (UniqueName: \"kubernetes.io/projected/6c9761e2-3f55-4c05-be61-594fa9592844-kube-api-access-7tznx\") pod \"6c9761e2-3f55-4c05-be61-594fa9592844\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.187893 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-combined-ca-bundle\") pod \"6c9761e2-3f55-4c05-be61-594fa9592844\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.187975 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-config-data\") pod \"6c9761e2-3f55-4c05-be61-594fa9592844\" (UID: \"6c9761e2-3f55-4c05-be61-594fa9592844\") " Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.192994 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9761e2-3f55-4c05-be61-594fa9592844-kube-api-access-7tznx" (OuterVolumeSpecName: "kube-api-access-7tznx") pod "6c9761e2-3f55-4c05-be61-594fa9592844" (UID: "6c9761e2-3f55-4c05-be61-594fa9592844"). InnerVolumeSpecName "kube-api-access-7tznx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.216319 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c9761e2-3f55-4c05-be61-594fa9592844" (UID: "6c9761e2-3f55-4c05-be61-594fa9592844"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.262944 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-config-data" (OuterVolumeSpecName: "config-data") pod "6c9761e2-3f55-4c05-be61-594fa9592844" (UID: "6c9761e2-3f55-4c05-be61-594fa9592844"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.290889 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.290922 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tznx\" (UniqueName: \"kubernetes.io/projected/6c9761e2-3f55-4c05-be61-594fa9592844-kube-api-access-7tznx\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.290932 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9761e2-3f55-4c05-be61-594fa9592844-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.812027 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xmlbc" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.812016 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xmlbc" event={"ID":"6c9761e2-3f55-4c05-be61-594fa9592844","Type":"ContainerDied","Data":"909792f0f96917226d9142b33fbea9d8a3fc7817d5ba84855232edf4d935f56c"} Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.812368 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="909792f0f96917226d9142b33fbea9d8a3fc7817d5ba84855232edf4d935f56c" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.813836 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e867acab-94c1-404c-976b-c1af058a4a24","Type":"ContainerStarted","Data":"922a920ff55d453255d4e81ba07c21c7552e26c90c41aa81746cfa7d354b3681"} Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.813875 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vntfr" Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.814157 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="sg-core" containerID="cri-o://0eb9a45ca9fc457c7b2bd4580bb21b0a47dbcfe3480890bccbe6c4bb6d1f4212" gracePeriod=30 Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.814176 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="ceilometer-notification-agent" containerID="cri-o://660ed7114a0681ca3b2ad9e6c2672f582f547ac404aea0e2decc165328e70b73" gracePeriod=30 Jan 27 22:11:37 crc kubenswrapper[4803]: I0127 22:11:37.814233 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="proxy-httpd" containerID="cri-o://922a920ff55d453255d4e81ba07c21c7552e26c90c41aa81746cfa7d354b3681" gracePeriod=30 Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.150771 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7d5449dd6c-29g4b"] Jan 27 22:11:38 crc kubenswrapper[4803]: E0127 22:11:38.151227 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c9761e2-3f55-4c05-be61-594fa9592844" containerName="heat-db-sync" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.151246 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c9761e2-3f55-4c05-be61-594fa9592844" containerName="heat-db-sync" Jan 27 22:11:38 crc kubenswrapper[4803]: E0127 22:11:38.151275 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3469063f-f2e9-46a9-bc44-bb35cf4b2149" containerName="barbican-db-sync" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.151282 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3469063f-f2e9-46a9-bc44-bb35cf4b2149" containerName="barbican-db-sync" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.151486 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c9761e2-3f55-4c05-be61-594fa9592844" containerName="heat-db-sync" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.151511 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="3469063f-f2e9-46a9-bc44-bb35cf4b2149" containerName="barbican-db-sync" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.152605 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.158322 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.158323 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.161880 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-49zkh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.181752 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d5449dd6c-29g4b"] Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.231981 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7ff7599c4b-9kdgh"] Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.236343 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.241590 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.247070 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7ff7599c4b-9kdgh"] Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.273273 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv6rd"] Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.275576 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.318686 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-logs\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.318771 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-config-data-custom\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.318816 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7wps\" (UniqueName: \"kubernetes.io/projected/857f23da-b896-42a6-bb08-e30d5e58a207-kube-api-access-q7wps\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.318888 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/857f23da-b896-42a6-bb08-e30d5e58a207-config-data\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 
crc kubenswrapper[4803]: I0127 22:11:38.318950 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/857f23da-b896-42a6-bb08-e30d5e58a207-combined-ca-bundle\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.318993 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/857f23da-b896-42a6-bb08-e30d5e58a207-config-data-custom\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.323317 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-combined-ca-bundle\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.323352 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr2pz\" (UniqueName: \"kubernetes.io/projected/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-kube-api-access-cr2pz\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.323390 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/857f23da-b896-42a6-bb08-e30d5e58a207-logs\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.323422 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-config-data\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.339174 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv6rd"] Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.450989 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-599bbf7fdb-qcdcv"] Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.452801 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-config-data-custom\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.452869 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7wps\" (UniqueName: \"kubernetes.io/projected/857f23da-b896-42a6-bb08-e30d5e58a207-kube-api-access-q7wps\") pod 
\"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.452968 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453016 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/857f23da-b896-42a6-bb08-e30d5e58a207-config-data\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453067 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453123 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/857f23da-b896-42a6-bb08-e30d5e58a207-combined-ca-bundle\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453261 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453295 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/857f23da-b896-42a6-bb08-e30d5e58a207-config-data-custom\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453420 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-config\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453448 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-combined-ca-bundle\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453466 4803 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cr2pz\" (UniqueName: \"kubernetes.io/projected/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-kube-api-access-cr2pz\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453512 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453528 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4pq9\" (UniqueName: \"kubernetes.io/projected/3684371a-0118-4f5a-95bd-66a6ac504ab6-kube-api-access-d4pq9\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453569 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/857f23da-b896-42a6-bb08-e30d5e58a207-logs\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.453600 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-config-data\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.454373 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-logs\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.460958 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/857f23da-b896-42a6-bb08-e30d5e58a207-logs\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.461946 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-logs\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.462530 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.471455 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/857f23da-b896-42a6-bb08-e30d5e58a207-config-data-custom\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.473763 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.478390 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/857f23da-b896-42a6-bb08-e30d5e58a207-config-data\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.479457 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-config-data-custom\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.480103 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-config-data\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.480565 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-combined-ca-bundle\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.481705 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/857f23da-b896-42a6-bb08-e30d5e58a207-combined-ca-bundle\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.499695 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr2pz\" (UniqueName: \"kubernetes.io/projected/2bc83a90-d100-4aaf-b9d1-b41d1791a9f7-kube-api-access-cr2pz\") pod \"barbican-worker-7d5449dd6c-29g4b\" (UID: \"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7\") " pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.506499 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7wps\" (UniqueName: \"kubernetes.io/projected/857f23da-b896-42a6-bb08-e30d5e58a207-kube-api-access-q7wps\") pod \"barbican-keystone-listener-7ff7599c4b-9kdgh\" (UID: \"857f23da-b896-42a6-bb08-e30d5e58a207\") " pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.531896 4803 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-599bbf7fdb-qcdcv"] Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.567961 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-config\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.568055 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4pq9\" (UniqueName: \"kubernetes.io/projected/3684371a-0118-4f5a-95bd-66a6ac504ab6-kube-api-access-d4pq9\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.568078 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.568138 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data-custom\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.568250 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.568295 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc5nn\" (UniqueName: \"kubernetes.io/projected/a40848f3-72e6-4de4-ac01-e68adec94fc2-kube-api-access-sc5nn\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.568329 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-combined-ca-bundle\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.568360 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.568409 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.568442 
4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.568517 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.568557 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40848f3-72e6-4de4-ac01-e68adec94fc2-logs\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.569457 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-config\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.569921 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.570093 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.570139 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.570177 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.598004 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4pq9\" (UniqueName: \"kubernetes.io/projected/3684371a-0118-4f5a-95bd-66a6ac504ab6-kube-api-access-d4pq9\") pod \"dnsmasq-dns-848cf88cfc-pv6rd\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.606492 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.673544 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40848f3-72e6-4de4-ac01-e68adec94fc2-logs\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.673638 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data-custom\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.673722 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc5nn\" (UniqueName: \"kubernetes.io/projected/a40848f3-72e6-4de4-ac01-e68adec94fc2-kube-api-access-sc5nn\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.673747 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-combined-ca-bundle\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.673786 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.674911 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40848f3-72e6-4de4-ac01-e68adec94fc2-logs\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.677783 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data-custom\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.678416 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-combined-ca-bundle\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.679061 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc 
kubenswrapper[4803]: I0127 22:11:38.696098 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc5nn\" (UniqueName: \"kubernetes.io/projected/a40848f3-72e6-4de4-ac01-e68adec94fc2-kube-api-access-sc5nn\") pod \"barbican-api-599bbf7fdb-qcdcv\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.786399 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d5449dd6c-29g4b" Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.854451 4803 generic.go:334] "Generic (PLEG): container finished" podID="e867acab-94c1-404c-976b-c1af058a4a24" containerID="922a920ff55d453255d4e81ba07c21c7552e26c90c41aa81746cfa7d354b3681" exitCode=0 Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.854501 4803 generic.go:334] "Generic (PLEG): container finished" podID="e867acab-94c1-404c-976b-c1af058a4a24" containerID="0eb9a45ca9fc457c7b2bd4580bb21b0a47dbcfe3480890bccbe6c4bb6d1f4212" exitCode=2 Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.854548 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e867acab-94c1-404c-976b-c1af058a4a24","Type":"ContainerDied","Data":"922a920ff55d453255d4e81ba07c21c7552e26c90c41aa81746cfa7d354b3681"} Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.854596 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e867acab-94c1-404c-976b-c1af058a4a24","Type":"ContainerDied","Data":"0eb9a45ca9fc457c7b2bd4580bb21b0a47dbcfe3480890bccbe6c4bb6d1f4212"} Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.857562 4803 generic.go:334] "Generic (PLEG): container finished" podID="d39e2273-cd2c-4e27-9890-39cf781c7508" containerID="c96c9e346836fe66124e1aa99deb802706dfb5b2e575bd2038955c360ff6ef4d" exitCode=0 Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.857623 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lsh9s" event={"ID":"d39e2273-cd2c-4e27-9890-39cf781c7508","Type":"ContainerDied","Data":"c96c9e346836fe66124e1aa99deb802706dfb5b2e575bd2038955c360ff6ef4d"} Jan 27 22:11:38 crc kubenswrapper[4803]: I0127 22:11:38.917634 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:39 crc kubenswrapper[4803]: I0127 22:11:39.192225 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7ff7599c4b-9kdgh"] Jan 27 22:11:39 crc kubenswrapper[4803]: I0127 22:11:39.217949 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv6rd"] Jan 27 22:11:39 crc kubenswrapper[4803]: I0127 22:11:39.457888 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d5449dd6c-29g4b"] Jan 27 22:11:39 crc kubenswrapper[4803]: W0127 22:11:39.571885 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda40848f3_72e6_4de4_ac01_e68adec94fc2.slice/crio-87dfd14d910a85c7478063065122d032b05824471747f3c4f1f33176af698a8d WatchSource:0}: Error finding container 87dfd14d910a85c7478063065122d032b05824471747f3c4f1f33176af698a8d: Status 404 returned error can't find the container with id 87dfd14d910a85c7478063065122d032b05824471747f3c4f1f33176af698a8d Jan 27 22:11:39 crc kubenswrapper[4803]: I0127 22:11:39.573937 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-599bbf7fdb-qcdcv"] Jan 27 22:11:39 crc kubenswrapper[4803]: I0127 22:11:39.868125 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" event={"ID":"857f23da-b896-42a6-bb08-e30d5e58a207","Type":"ContainerStarted","Data":"efd78cca3073994725b0d67eb47d34bc8523a2a13e80906a7f6c7891703e1a5b"} Jan 27 22:11:39 crc kubenswrapper[4803]: I0127 22:11:39.869698 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5449dd6c-29g4b" event={"ID":"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7","Type":"ContainerStarted","Data":"54fb46686dbd63bacae109fe2ae4e0a5c6f99e7405160f031e2ae1c20545a07f"} Jan 27 22:11:39 crc kubenswrapper[4803]: I0127 22:11:39.872566 4803 generic.go:334] "Generic (PLEG): container finished" podID="3684371a-0118-4f5a-95bd-66a6ac504ab6" containerID="33c37fe848f68034473fa937cb55ed8d9ada7f4540eef4a2620b663029580073" exitCode=0 Jan 27 22:11:39 crc kubenswrapper[4803]: I0127 22:11:39.872600 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" event={"ID":"3684371a-0118-4f5a-95bd-66a6ac504ab6","Type":"ContainerDied","Data":"33c37fe848f68034473fa937cb55ed8d9ada7f4540eef4a2620b663029580073"} Jan 27 22:11:39 crc kubenswrapper[4803]: I0127 22:11:39.872635 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" event={"ID":"3684371a-0118-4f5a-95bd-66a6ac504ab6","Type":"ContainerStarted","Data":"01f200265cdf1ed80803819abbfd65505a3e9d27465fa3342aba476a0c721202"} Jan 27 22:11:39 crc kubenswrapper[4803]: I0127 22:11:39.876443 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-599bbf7fdb-qcdcv" event={"ID":"a40848f3-72e6-4de4-ac01-e68adec94fc2","Type":"ContainerStarted","Data":"f3d49fc150b52ca36b05e5c5f96f6e9924ea37d3d3ee59d60abaeb92cd16709e"} Jan 27 22:11:39 crc kubenswrapper[4803]: I0127 22:11:39.876482 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-599bbf7fdb-qcdcv" event={"ID":"a40848f3-72e6-4de4-ac01-e68adec94fc2","Type":"ContainerStarted","Data":"87dfd14d910a85c7478063065122d032b05824471747f3c4f1f33176af698a8d"} Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.574627 4803 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.651070 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d39e2273-cd2c-4e27-9890-39cf781c7508-etc-machine-id\") pod \"d39e2273-cd2c-4e27-9890-39cf781c7508\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.651184 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-scripts\") pod \"d39e2273-cd2c-4e27-9890-39cf781c7508\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.651193 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39e2273-cd2c-4e27-9890-39cf781c7508-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d39e2273-cd2c-4e27-9890-39cf781c7508" (UID: "d39e2273-cd2c-4e27-9890-39cf781c7508"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.651339 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-db-sync-config-data\") pod \"d39e2273-cd2c-4e27-9890-39cf781c7508\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.651456 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-combined-ca-bundle\") pod \"d39e2273-cd2c-4e27-9890-39cf781c7508\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.651562 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-config-data\") pod \"d39e2273-cd2c-4e27-9890-39cf781c7508\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.651602 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82b4z\" (UniqueName: \"kubernetes.io/projected/d39e2273-cd2c-4e27-9890-39cf781c7508-kube-api-access-82b4z\") pod \"d39e2273-cd2c-4e27-9890-39cf781c7508\" (UID: \"d39e2273-cd2c-4e27-9890-39cf781c7508\") " Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.652779 4803 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d39e2273-cd2c-4e27-9890-39cf781c7508-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.658497 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d39e2273-cd2c-4e27-9890-39cf781c7508" (UID: "d39e2273-cd2c-4e27-9890-39cf781c7508"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.662206 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-scripts" (OuterVolumeSpecName: "scripts") pod "d39e2273-cd2c-4e27-9890-39cf781c7508" (UID: "d39e2273-cd2c-4e27-9890-39cf781c7508"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.671769 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39e2273-cd2c-4e27-9890-39cf781c7508-kube-api-access-82b4z" (OuterVolumeSpecName: "kube-api-access-82b4z") pod "d39e2273-cd2c-4e27-9890-39cf781c7508" (UID: "d39e2273-cd2c-4e27-9890-39cf781c7508"). InnerVolumeSpecName "kube-api-access-82b4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.697805 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d39e2273-cd2c-4e27-9890-39cf781c7508" (UID: "d39e2273-cd2c-4e27-9890-39cf781c7508"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.719705 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-config-data" (OuterVolumeSpecName: "config-data") pod "d39e2273-cd2c-4e27-9890-39cf781c7508" (UID: "d39e2273-cd2c-4e27-9890-39cf781c7508"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.755113 4803 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.755149 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.755159 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.755170 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82b4z\" (UniqueName: \"kubernetes.io/projected/d39e2273-cd2c-4e27-9890-39cf781c7508-kube-api-access-82b4z\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.755181 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39e2273-cd2c-4e27-9890-39cf781c7508-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.904478 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" event={"ID":"3684371a-0118-4f5a-95bd-66a6ac504ab6","Type":"ContainerStarted","Data":"a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca"} Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.905302 4803 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.908878 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lsh9s" event={"ID":"d39e2273-cd2c-4e27-9890-39cf781c7508","Type":"ContainerDied","Data":"dcc1bcf45dc25f7ca00805686b91cdf524da3aba47e6e60533cd83474ffb944a"} Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.908914 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lsh9s" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.908926 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc1bcf45dc25f7ca00805686b91cdf524da3aba47e6e60533cd83474ffb944a" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.914703 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-599bbf7fdb-qcdcv" event={"ID":"a40848f3-72e6-4de4-ac01-e68adec94fc2","Type":"ContainerStarted","Data":"af4f5e910378d94e6a1207127ee81bcd1053a61b73a41a5a651e7c092b1502e0"} Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.914772 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.914788 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.931754 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" podStartSLOduration=2.931737486 podStartE2EDuration="2.931737486s" podCreationTimestamp="2026-01-27 22:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:40.923973947 +0000 UTC m=+1453.339995646" watchObservedRunningTime="2026-01-27 22:11:40.931737486 +0000 UTC m=+1453.347759185" Jan 27 22:11:40 crc kubenswrapper[4803]: I0127 22:11:40.961257 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-599bbf7fdb-qcdcv" podStartSLOduration=2.9612397010000002 podStartE2EDuration="2.961239701s" podCreationTimestamp="2026-01-27 22:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:40.949579137 +0000 UTC m=+1453.365600836" watchObservedRunningTime="2026-01-27 22:11:40.961239701 +0000 UTC m=+1453.377261400" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.194698 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 22:11:41 crc kubenswrapper[4803]: E0127 22:11:41.196116 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39e2273-cd2c-4e27-9890-39cf781c7508" containerName="cinder-db-sync" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.196141 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39e2273-cd2c-4e27-9890-39cf781c7508" containerName="cinder-db-sync" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.199676 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39e2273-cd2c-4e27-9890-39cf781c7508" containerName="cinder-db-sync" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.293634 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 22:11:41 crc 
kubenswrapper[4803]: I0127 22:11:41.293773 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.332742 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w6ds7" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.333041 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.333144 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.333223 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.351156 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv6rd"] Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.374804 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g8cxd"] Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.395751 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.444158 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g8cxd"] Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.486991 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kn5s\" (UniqueName: \"kubernetes.io/projected/01856d15-d761-4a96-9c28-8de6b7a980e8-kube-api-access-5kn5s\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.487062 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.487161 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01856d15-d761-4a96-9c28-8de6b7a980e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.487252 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.487303 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.487395 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589308 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589400 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589427 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589472 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kn5s\" (UniqueName: \"kubernetes.io/projected/01856d15-d761-4a96-9c28-8de6b7a980e8-kube-api-access-5kn5s\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589503 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589522 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589552 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589589 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-config\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589614 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/01856d15-d761-4a96-9c28-8de6b7a980e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589649 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589685 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.589700 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg9hl\" (UniqueName: \"kubernetes.io/projected/f5bae15d-ce93-43a9-8fc4-49200676a31d-kube-api-access-vg9hl\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.590865 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01856d15-d761-4a96-9c28-8de6b7a980e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.608626 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.608868 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.620030 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.622398 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.626003 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.626285 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.626534 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.626672 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kn5s\" (UniqueName: \"kubernetes.io/projected/01856d15-d761-4a96-9c28-8de6b7a980e8-kube-api-access-5kn5s\") pod \"cinder-scheduler-0\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.652045 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.694238 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.694344 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.694391 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.694450 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-config\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.694500 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.694537 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg9hl\" 
(UniqueName: \"kubernetes.io/projected/f5bae15d-ce93-43a9-8fc4-49200676a31d-kube-api-access-vg9hl\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.698452 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.703232 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.706517 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.707143 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.708239 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-config\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.708412 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.725726 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg9hl\" (UniqueName: \"kubernetes.io/projected/f5bae15d-ce93-43a9-8fc4-49200676a31d-kube-api-access-vg9hl\") pod \"dnsmasq-dns-6578955fd5-g8cxd\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.740351 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.795906 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2klw\" (UniqueName: \"kubernetes.io/projected/53f3ea29-9273-4b38-8f97-0821042ab7fc-kube-api-access-k2klw\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.796272 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data-custom\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.796364 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-scripts\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.796405 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.796499 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.796556 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f3ea29-9273-4b38-8f97-0821042ab7fc-logs\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.796597 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f3ea29-9273-4b38-8f97-0821042ab7fc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.900287 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f3ea29-9273-4b38-8f97-0821042ab7fc-logs\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.900606 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f3ea29-9273-4b38-8f97-0821042ab7fc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.900670 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2klw\" 
(UniqueName: \"kubernetes.io/projected/53f3ea29-9273-4b38-8f97-0821042ab7fc-kube-api-access-k2klw\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.900701 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data-custom\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.900785 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-scripts\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.900814 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.900905 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.902371 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f3ea29-9273-4b38-8f97-0821042ab7fc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.925558 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-scripts\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.926238 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f3ea29-9273-4b38-8f97-0821042ab7fc-logs\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.927205 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.927345 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.927727 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2klw\" (UniqueName: 
\"kubernetes.io/projected/53f3ea29-9273-4b38-8f97-0821042ab7fc-kube-api-access-k2klw\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.934343 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data-custom\") pod \"cinder-api-0\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " pod="openstack/cinder-api-0" Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.980808 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" event={"ID":"857f23da-b896-42a6-bb08-e30d5e58a207","Type":"ContainerStarted","Data":"db54e3a85d78315a0c974ec7c6aea969dcab527d6b4f7744404c6e646f6bc7b5"} Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.980869 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" event={"ID":"857f23da-b896-42a6-bb08-e30d5e58a207","Type":"ContainerStarted","Data":"9a0d0e99eed2130a169730c4b5424a4dbe1f99e70d4e795a8da1505bdc542f92"} Jan 27 22:11:41 crc kubenswrapper[4803]: I0127 22:11:41.992637 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5449dd6c-29g4b" event={"ID":"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7","Type":"ContainerStarted","Data":"22d6d15cd64f4fb54f2d313f1c5cfbca3d9933b7d667e6dcf0f006ae92674311"} Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.120399 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-f74474d96-gcmdd"] Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.122886 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.126687 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.128746 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.133505 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f74474d96-gcmdd"] Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.227335 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.306686 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.310824 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0ec158b-9237-431b-a0ac-0b6d236706b3-logs\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.310872 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-config-data-custom\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.310895 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-combined-ca-bundle\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.311019 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-config-data\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.311042 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-internal-tls-certs\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.312033 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-public-tls-certs\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.312497 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcz7j\" (UniqueName: \"kubernetes.io/projected/d0ec158b-9237-431b-a0ac-0b6d236706b3-kube-api-access-qcz7j\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: W0127 22:11:42.343075 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01856d15_d761_4a96_9c28_8de6b7a980e8.slice/crio-0713cb66cc2c0d825bd5f31438519e249e8f68878f2039eaa3595b4d2abc5715 WatchSource:0}: Error finding container 0713cb66cc2c0d825bd5f31438519e249e8f68878f2039eaa3595b4d2abc5715: Status 404 returned error can't find the container with id 
0713cb66cc2c0d825bd5f31438519e249e8f68878f2039eaa3595b4d2abc5715 Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.417097 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-public-tls-certs\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.417208 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcz7j\" (UniqueName: \"kubernetes.io/projected/d0ec158b-9237-431b-a0ac-0b6d236706b3-kube-api-access-qcz7j\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.417270 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0ec158b-9237-431b-a0ac-0b6d236706b3-logs\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.417303 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-config-data-custom\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.417335 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-combined-ca-bundle\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.417567 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-config-data\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.417604 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-internal-tls-certs\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.419042 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0ec158b-9237-431b-a0ac-0b6d236706b3-logs\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.425500 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-config-data\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc 
kubenswrapper[4803]: I0127 22:11:42.426745 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-combined-ca-bundle\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.432633 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-config-data-custom\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.439628 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-internal-tls-certs\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.439976 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0ec158b-9237-431b-a0ac-0b6d236706b3-public-tls-certs\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.441994 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcz7j\" (UniqueName: \"kubernetes.io/projected/d0ec158b-9237-431b-a0ac-0b6d236706b3-kube-api-access-qcz7j\") pod \"barbican-api-f74474d96-gcmdd\" (UID: \"d0ec158b-9237-431b-a0ac-0b6d236706b3\") " pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:42 crc kubenswrapper[4803]: W0127 22:11:42.459468 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5bae15d_ce93_43a9_8fc4_49200676a31d.slice/crio-c49231532faccce31ffcc8a6d7fe45e93cc66644c031ddf12a0aab35284bbadc WatchSource:0}: Error finding container c49231532faccce31ffcc8a6d7fe45e93cc66644c031ddf12a0aab35284bbadc: Status 404 returned error can't find the container with id c49231532faccce31ffcc8a6d7fe45e93cc66644c031ddf12a0aab35284bbadc Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.463981 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g8cxd"] Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.725674 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 22:11:42 crc kubenswrapper[4803]: I0127 22:11:42.741819 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.028065 4803 generic.go:334] "Generic (PLEG): container finished" podID="f5bae15d-ce93-43a9-8fc4-49200676a31d" containerID="94d2b53bca0a76ae46cac772ed7d80b1de8b0e0f4ea481215e9667db782a9193" exitCode=0 Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.028289 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" event={"ID":"f5bae15d-ce93-43a9-8fc4-49200676a31d","Type":"ContainerDied","Data":"94d2b53bca0a76ae46cac772ed7d80b1de8b0e0f4ea481215e9667db782a9193"} Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.028453 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" event={"ID":"f5bae15d-ce93-43a9-8fc4-49200676a31d","Type":"ContainerStarted","Data":"c49231532faccce31ffcc8a6d7fe45e93cc66644c031ddf12a0aab35284bbadc"} Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.037670 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"53f3ea29-9273-4b38-8f97-0821042ab7fc","Type":"ContainerStarted","Data":"65b598bc94ae4c7ac5698c8b44f94e61ebb2d665fdf05b0f48a72fba24868bed"} Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.051473 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01856d15-d761-4a96-9c28-8de6b7a980e8","Type":"ContainerStarted","Data":"0713cb66cc2c0d825bd5f31438519e249e8f68878f2039eaa3595b4d2abc5715"} Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.068182 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5449dd6c-29g4b" event={"ID":"2bc83a90-d100-4aaf-b9d1-b41d1791a9f7","Type":"ContainerStarted","Data":"51212a755be3663d64b4fc544996c58902dbc2e95f158f7d93383fa3ab002f9a"} Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.120444 4803 generic.go:334] "Generic (PLEG): container finished" podID="e867acab-94c1-404c-976b-c1af058a4a24" containerID="660ed7114a0681ca3b2ad9e6c2672f582f547ac404aea0e2decc165328e70b73" exitCode=0 Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.120657 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" podUID="3684371a-0118-4f5a-95bd-66a6ac504ab6" containerName="dnsmasq-dns" containerID="cri-o://a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca" gracePeriod=10 Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.120935 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e867acab-94c1-404c-976b-c1af058a4a24","Type":"ContainerDied","Data":"660ed7114a0681ca3b2ad9e6c2672f582f547ac404aea0e2decc165328e70b73"} Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.132786 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7d5449dd6c-29g4b" podStartSLOduration=3.458808201 podStartE2EDuration="5.132763508s" podCreationTimestamp="2026-01-27 22:11:38 +0000 UTC" firstStartedPulling="2026-01-27 22:11:39.471285882 +0000 UTC m=+1451.887307571" lastFinishedPulling="2026-01-27 22:11:41.145241179 +0000 UTC m=+1453.561262878" observedRunningTime="2026-01-27 22:11:43.117491706 +0000 UTC m=+1455.533513405" watchObservedRunningTime="2026-01-27 22:11:43.132763508 +0000 UTC m=+1455.548785197" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.148623 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-keystone-listener-7ff7599c4b-9kdgh" podStartSLOduration=3.742154024 podStartE2EDuration="5.148599544s" podCreationTimestamp="2026-01-27 22:11:38 +0000 UTC" firstStartedPulling="2026-01-27 22:11:39.21556382 +0000 UTC m=+1451.631585519" lastFinishedPulling="2026-01-27 22:11:40.62200934 +0000 UTC m=+1453.038031039" observedRunningTime="2026-01-27 22:11:43.141258757 +0000 UTC m=+1455.557280456" watchObservedRunningTime="2026-01-27 22:11:43.148599544 +0000 UTC m=+1455.564621243" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.274017 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f74474d96-gcmdd"] Jan 27 22:11:43 crc kubenswrapper[4803]: W0127 22:11:43.382026 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0ec158b_9237_431b_a0ac_0b6d236706b3.slice/crio-57c288f7a8c3fa59ee3d6b37f90f41b802a5901da99a906e790a76101cf93518 WatchSource:0}: Error finding container 57c288f7a8c3fa59ee3d6b37f90f41b802a5901da99a906e790a76101cf93518: Status 404 returned error can't find the container with id 57c288f7a8c3fa59ee3d6b37f90f41b802a5901da99a906e790a76101cf93518 Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.497793 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.660877 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-sg-core-conf-yaml\") pod \"e867acab-94c1-404c-976b-c1af058a4a24\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.661002 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-scripts\") pod \"e867acab-94c1-404c-976b-c1af058a4a24\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.661027 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-log-httpd\") pod \"e867acab-94c1-404c-976b-c1af058a4a24\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.661182 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-combined-ca-bundle\") pod \"e867acab-94c1-404c-976b-c1af058a4a24\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.661254 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-run-httpd\") pod \"e867acab-94c1-404c-976b-c1af058a4a24\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.661373 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5bqb\" (UniqueName: \"kubernetes.io/projected/e867acab-94c1-404c-976b-c1af058a4a24-kube-api-access-j5bqb\") pod \"e867acab-94c1-404c-976b-c1af058a4a24\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.661472 
4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-config-data\") pod \"e867acab-94c1-404c-976b-c1af058a4a24\" (UID: \"e867acab-94c1-404c-976b-c1af058a4a24\") " Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.663317 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e867acab-94c1-404c-976b-c1af058a4a24" (UID: "e867acab-94c1-404c-976b-c1af058a4a24"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.663809 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e867acab-94c1-404c-976b-c1af058a4a24" (UID: "e867acab-94c1-404c-976b-c1af058a4a24"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.665827 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.665897 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e867acab-94c1-404c-976b-c1af058a4a24-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.683525 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e867acab-94c1-404c-976b-c1af058a4a24-kube-api-access-j5bqb" (OuterVolumeSpecName: "kube-api-access-j5bqb") pod "e867acab-94c1-404c-976b-c1af058a4a24" (UID: "e867acab-94c1-404c-976b-c1af058a4a24"). InnerVolumeSpecName "kube-api-access-j5bqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.683749 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-scripts" (OuterVolumeSpecName: "scripts") pod "e867acab-94c1-404c-976b-c1af058a4a24" (UID: "e867acab-94c1-404c-976b-c1af058a4a24"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.769523 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5bqb\" (UniqueName: \"kubernetes.io/projected/e867acab-94c1-404c-976b-c1af058a4a24-kube-api-access-j5bqb\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.769555 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.795293 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e867acab-94c1-404c-976b-c1af058a4a24" (UID: "e867acab-94c1-404c-976b-c1af058a4a24"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.870229 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e867acab-94c1-404c-976b-c1af058a4a24" (UID: "e867acab-94c1-404c-976b-c1af058a4a24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.873307 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.873347 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.920902 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-config-data" (OuterVolumeSpecName: "config-data") pod "e867acab-94c1-404c-976b-c1af058a4a24" (UID: "e867acab-94c1-404c-976b-c1af058a4a24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:43 crc kubenswrapper[4803]: I0127 22:11:43.979495 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e867acab-94c1-404c-976b-c1af058a4a24-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.044887 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.154733 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f74474d96-gcmdd" event={"ID":"d0ec158b-9237-431b-a0ac-0b6d236706b3","Type":"ContainerStarted","Data":"57c288f7a8c3fa59ee3d6b37f90f41b802a5901da99a906e790a76101cf93518"} Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.160307 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e867acab-94c1-404c-976b-c1af058a4a24","Type":"ContainerDied","Data":"2f28c003f9f68c2e51b428f5e0c37eb7ca5b27d11eb875d0bce08bf7fcb04ab0"} Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.160354 4803 scope.go:117] "RemoveContainer" containerID="922a920ff55d453255d4e81ba07c21c7552e26c90c41aa81746cfa7d354b3681" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.160476 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.166032 4803 generic.go:334] "Generic (PLEG): container finished" podID="3684371a-0118-4f5a-95bd-66a6ac504ab6" containerID="a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca" exitCode=0 Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.166110 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.166125 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" event={"ID":"3684371a-0118-4f5a-95bd-66a6ac504ab6","Type":"ContainerDied","Data":"a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca"} Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.166158 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pv6rd" event={"ID":"3684371a-0118-4f5a-95bd-66a6ac504ab6","Type":"ContainerDied","Data":"01f200265cdf1ed80803819abbfd65505a3e9d27465fa3342aba476a0c721202"} Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.172304 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" event={"ID":"f5bae15d-ce93-43a9-8fc4-49200676a31d","Type":"ContainerStarted","Data":"02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508"} Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.172456 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.182730 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-config\") pod \"3684371a-0118-4f5a-95bd-66a6ac504ab6\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.182934 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-nb\") pod \"3684371a-0118-4f5a-95bd-66a6ac504ab6\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.182969 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-sb\") pod \"3684371a-0118-4f5a-95bd-66a6ac504ab6\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.182994 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-svc\") pod \"3684371a-0118-4f5a-95bd-66a6ac504ab6\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.183019 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4pq9\" (UniqueName: \"kubernetes.io/projected/3684371a-0118-4f5a-95bd-66a6ac504ab6-kube-api-access-d4pq9\") pod \"3684371a-0118-4f5a-95bd-66a6ac504ab6\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.183148 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-swift-storage-0\") pod \"3684371a-0118-4f5a-95bd-66a6ac504ab6\" (UID: \"3684371a-0118-4f5a-95bd-66a6ac504ab6\") " Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.218513 4803 scope.go:117] "RemoveContainer" containerID="0eb9a45ca9fc457c7b2bd4580bb21b0a47dbcfe3480890bccbe6c4bb6d1f4212" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.219991 4803 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" podStartSLOduration=3.219968915 podStartE2EDuration="3.219968915s" podCreationTimestamp="2026-01-27 22:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:44.205925806 +0000 UTC m=+1456.621947515" watchObservedRunningTime="2026-01-27 22:11:44.219968915 +0000 UTC m=+1456.635990614" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.270109 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3684371a-0118-4f5a-95bd-66a6ac504ab6-kube-api-access-d4pq9" (OuterVolumeSpecName: "kube-api-access-d4pq9") pod "3684371a-0118-4f5a-95bd-66a6ac504ab6" (UID: "3684371a-0118-4f5a-95bd-66a6ac504ab6"). InnerVolumeSpecName "kube-api-access-d4pq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.274769 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.286515 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4pq9\" (UniqueName: \"kubernetes.io/projected/3684371a-0118-4f5a-95bd-66a6ac504ab6-kube-api-access-d4pq9\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.351433 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.351822 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:11:44 crc kubenswrapper[4803]: E0127 22:11:44.352324 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="proxy-httpd" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.352341 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="proxy-httpd" Jan 27 22:11:44 crc kubenswrapper[4803]: E0127 22:11:44.352361 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3684371a-0118-4f5a-95bd-66a6ac504ab6" containerName="init" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.352367 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3684371a-0118-4f5a-95bd-66a6ac504ab6" containerName="init" Jan 27 22:11:44 crc kubenswrapper[4803]: E0127 22:11:44.352397 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="sg-core" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.352404 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="sg-core" Jan 27 22:11:44 crc kubenswrapper[4803]: E0127 22:11:44.352421 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="ceilometer-notification-agent" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.352427 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="ceilometer-notification-agent" Jan 27 22:11:44 crc kubenswrapper[4803]: E0127 22:11:44.352440 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3684371a-0118-4f5a-95bd-66a6ac504ab6" containerName="dnsmasq-dns" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.352446 4803 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="3684371a-0118-4f5a-95bd-66a6ac504ab6" containerName="dnsmasq-dns" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.352632 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="proxy-httpd" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.352648 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="3684371a-0118-4f5a-95bd-66a6ac504ab6" containerName="dnsmasq-dns" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.352658 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="ceilometer-notification-agent" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.352670 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e867acab-94c1-404c-976b-c1af058a4a24" containerName="sg-core" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.359487 4803 scope.go:117] "RemoveContainer" containerID="660ed7114a0681ca3b2ad9e6c2672f582f547ac404aea0e2decc165328e70b73" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.360127 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.360238 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.364634 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.364868 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.394049 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-config" (OuterVolumeSpecName: "config") pod "3684371a-0118-4f5a-95bd-66a6ac504ab6" (UID: "3684371a-0118-4f5a-95bd-66a6ac504ab6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.405726 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3684371a-0118-4f5a-95bd-66a6ac504ab6" (UID: "3684371a-0118-4f5a-95bd-66a6ac504ab6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.418770 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3684371a-0118-4f5a-95bd-66a6ac504ab6" (UID: "3684371a-0118-4f5a-95bd-66a6ac504ab6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.423415 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3684371a-0118-4f5a-95bd-66a6ac504ab6" (UID: "3684371a-0118-4f5a-95bd-66a6ac504ab6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.427727 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3684371a-0118-4f5a-95bd-66a6ac504ab6" (UID: "3684371a-0118-4f5a-95bd-66a6ac504ab6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.462749 4803 scope.go:117] "RemoveContainer" containerID="a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490428 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-run-httpd\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490480 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-config-data\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490559 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490581 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490666 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xbwb\" (UniqueName: \"kubernetes.io/projected/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-kube-api-access-6xbwb\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490697 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-log-httpd\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490787 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-scripts\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0" Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490872 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:44 
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490884 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490894 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490904 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.490916 4803 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3684371a-0118-4f5a-95bd-66a6ac504ab6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.547573 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv6rd"]
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.549899 4803 scope.go:117] "RemoveContainer" containerID="33c37fe848f68034473fa937cb55ed8d9ada7f4540eef4a2620b663029580073"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.578542 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv6rd"]
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.593999 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.594255 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.594495 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xbwb\" (UniqueName: \"kubernetes.io/projected/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-kube-api-access-6xbwb\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.594658 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-log-httpd\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.595060 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-scripts\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.595192 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-run-httpd\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.595333 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-config-data\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.597536 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-log-httpd\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.598833 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-run-httpd\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.599194 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.602305 4803 scope.go:117] "RemoveContainer" containerID="a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.603357 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.604972 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-scripts\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.612642 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-config-data\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: E0127 22:11:44.615726 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca\": container with ID starting with a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca not found: ID does not exist" containerID="a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.616117 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca"} err="failed to get container status \"a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca\": rpc error: code = NotFound desc = could not find container \"a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca\": container with ID starting with a832949eabc9ebd719246eb68fcc1c300d04a0a81ca489231e984f1816ad71ca not found: ID does not exist"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.616270 4803 scope.go:117] "RemoveContainer" containerID="33c37fe848f68034473fa937cb55ed8d9ada7f4540eef4a2620b663029580073"
Jan 27 22:11:44 crc kubenswrapper[4803]: E0127 22:11:44.623894 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33c37fe848f68034473fa937cb55ed8d9ada7f4540eef4a2620b663029580073\": container with ID starting with 33c37fe848f68034473fa937cb55ed8d9ada7f4540eef4a2620b663029580073 not found: ID does not exist" containerID="33c37fe848f68034473fa937cb55ed8d9ada7f4540eef4a2620b663029580073"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.624136 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c37fe848f68034473fa937cb55ed8d9ada7f4540eef4a2620b663029580073"} err="failed to get container status \"33c37fe848f68034473fa937cb55ed8d9ada7f4540eef4a2620b663029580073\": rpc error: code = NotFound desc = could not find container \"33c37fe848f68034473fa937cb55ed8d9ada7f4540eef4a2620b663029580073\": container with ID starting with 33c37fe848f68034473fa937cb55ed8d9ada7f4540eef4a2620b663029580073 not found: ID does not exist"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.625015 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xbwb\" (UniqueName: \"kubernetes.io/projected/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-kube-api-access-6xbwb\") pod \"ceilometer-0\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") " pod="openstack/ceilometer-0"
Jan 27 22:11:44 crc kubenswrapper[4803]: I0127 22:11:44.739635 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 22:11:45 crc kubenswrapper[4803]: I0127 22:11:45.224164 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"53f3ea29-9273-4b38-8f97-0821042ab7fc","Type":"ContainerStarted","Data":"3b62c9e9d1049e90333341856cdf9135fc3721b0df09dc45acd7a1a1c2f348ce"}
Jan 27 22:11:45 crc kubenswrapper[4803]: I0127 22:11:45.227699 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f74474d96-gcmdd" event={"ID":"d0ec158b-9237-431b-a0ac-0b6d236706b3","Type":"ContainerStarted","Data":"b8504d1f3aad928e977a9fc302214b1000baee2f14a7b541d8c89ebaa6c1d302"}
Jan 27 22:11:45 crc kubenswrapper[4803]: I0127 22:11:45.227747 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f74474d96-gcmdd" event={"ID":"d0ec158b-9237-431b-a0ac-0b6d236706b3","Type":"ContainerStarted","Data":"a06046b6459b07b50133f1e32c68ffea93488a7cf0020de5bc324ddf927ad8d5"}
Jan 27 22:11:45 crc kubenswrapper[4803]: I0127 22:11:45.227882 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f74474d96-gcmdd"
Jan 27 22:11:45 crc kubenswrapper[4803]: I0127 22:11:45.232574 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01856d15-d761-4a96-9c28-8de6b7a980e8","Type":"ContainerStarted","Data":"0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4"}
Jan 27 22:11:45 crc kubenswrapper[4803]: I0127 22:11:45.246568 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 22:11:45 crc kubenswrapper[4803]: I0127 22:11:45.263723 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-f74474d96-gcmdd" podStartSLOduration=3.26370277 podStartE2EDuration="3.26370277s" podCreationTimestamp="2026-01-27 22:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:45.25478775 +0000 UTC m=+1457.670809449" watchObservedRunningTime="2026-01-27 22:11:45.26370277 +0000 UTC m=+1457.679724469"
Jan 27 22:11:45 crc kubenswrapper[4803]: I0127 22:11:45.328978 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.195437 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jqw45"]
Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.214948 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jqw45"
Need to start a new one" pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.221292 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jqw45"] Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.275994 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e408c3-f14c-4cad-a5b5-24d601fcb8d8","Type":"ContainerStarted","Data":"6e1ccfd92241094c36c6597a9ca17f2e07201b2ffbb909421e10fcfd8b58d09f"} Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.276036 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e408c3-f14c-4cad-a5b5-24d601fcb8d8","Type":"ContainerStarted","Data":"fa450a52a32a5c6b8a3fa7591fb01464b7cc4bab478e16e2e58238c9f70b3cd8"} Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.277432 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"53f3ea29-9273-4b38-8f97-0821042ab7fc","Type":"ContainerStarted","Data":"941855844e2addd58dcf242d6a56a1b21d6c296a848378eda90aff8647ab89ac"} Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.277566 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="53f3ea29-9273-4b38-8f97-0821042ab7fc" containerName="cinder-api-log" containerID="cri-o://3b62c9e9d1049e90333341856cdf9135fc3721b0df09dc45acd7a1a1c2f348ce" gracePeriod=30 Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.277834 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.278143 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="53f3ea29-9273-4b38-8f97-0821042ab7fc" containerName="cinder-api" containerID="cri-o://941855844e2addd58dcf242d6a56a1b21d6c296a848378eda90aff8647ab89ac" gracePeriod=30 Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.283202 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01856d15-d761-4a96-9c28-8de6b7a980e8","Type":"ContainerStarted","Data":"fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26"} Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.283292 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.323901 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.323842898 podStartE2EDuration="5.323842898s" podCreationTimestamp="2026-01-27 22:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:46.301449105 +0000 UTC m=+1458.717470814" watchObservedRunningTime="2026-01-27 22:11:46.323842898 +0000 UTC m=+1458.739864597" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.334668 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.189946794 podStartE2EDuration="5.334648519s" podCreationTimestamp="2026-01-27 22:11:41 +0000 UTC" firstStartedPulling="2026-01-27 22:11:42.346293535 +0000 UTC m=+1454.762315234" lastFinishedPulling="2026-01-27 22:11:43.49099526 +0000 UTC m=+1455.907016959" observedRunningTime="2026-01-27 22:11:46.330158538 +0000 UTC 
m=+1458.746180237" watchObservedRunningTime="2026-01-27 22:11:46.334648519 +0000 UTC m=+1458.750670218" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.344507 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3684371a-0118-4f5a-95bd-66a6ac504ab6" path="/var/lib/kubelet/pods/3684371a-0118-4f5a-95bd-66a6ac504ab6/volumes" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.346105 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.346202 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.349517 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e867acab-94c1-404c-976b-c1af058a4a24" path="/var/lib/kubelet/pods/e867acab-94c1-404c-976b-c1af058a4a24/volumes" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.350255 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.354467 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-utilities\") pod \"redhat-operators-jqw45\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.354552 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42fl5\" (UniqueName: \"kubernetes.io/projected/8557daa0-d032-4ce3-845b-2ff667b49c7a-kube-api-access-42fl5\") pod \"redhat-operators-jqw45\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.354601 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-catalog-content\") pod \"redhat-operators-jqw45\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.456524 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-utilities\") pod \"redhat-operators-jqw45\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.457060 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42fl5\" (UniqueName: \"kubernetes.io/projected/8557daa0-d032-4ce3-845b-2ff667b49c7a-kube-api-access-42fl5\") pod \"redhat-operators-jqw45\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 
22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.457230 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-utilities\") pod \"redhat-operators-jqw45\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.457247 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-catalog-content\") pod \"redhat-operators-jqw45\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.457726 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-catalog-content\") pod \"redhat-operators-jqw45\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.477535 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42fl5\" (UniqueName: \"kubernetes.io/projected/8557daa0-d032-4ce3-845b-2ff667b49c7a-kube-api-access-42fl5\") pod \"redhat-operators-jqw45\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.537886 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.707246 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.709565 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6757ddbf5c-pprm6"] Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.709831 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6757ddbf5c-pprm6" podUID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerName="neutron-api" containerID="cri-o://78568fb3db5c74fd564077b566b910d6edb0c0f3c55607d46b0f159f38873b29" gracePeriod=30 Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.709990 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6757ddbf5c-pprm6" podUID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerName="neutron-httpd" containerID="cri-o://69eec6eafa07ee07a80b49fcc5b45fd29e0818680f57443cc5152c5e9613a0e8" gracePeriod=30 Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.750904 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7f556d549c-2bkn4"] Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.753315 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.794343 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f556d549c-2bkn4"] Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.870351 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-config\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.870718 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s6qb\" (UniqueName: \"kubernetes.io/projected/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-kube-api-access-2s6qb\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.870767 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-httpd-config\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.870789 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-ovndb-tls-certs\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.870812 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-combined-ca-bundle\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.870891 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-internal-tls-certs\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.870966 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-public-tls-certs\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.976567 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-config\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.976636 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2s6qb\" (UniqueName: \"kubernetes.io/projected/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-kube-api-access-2s6qb\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.976740 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-httpd-config\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.976767 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-ovndb-tls-certs\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.976797 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-combined-ca-bundle\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.976932 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-internal-tls-certs\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.977080 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-public-tls-certs\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.989741 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-httpd-config\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:46 crc kubenswrapper[4803]: I0127 22:11:46.999678 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-config\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.008265 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-public-tls-certs\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.019640 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-ovndb-tls-certs\") pod \"neutron-7f556d549c-2bkn4\" (UID: 
\"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.023012 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-combined-ca-bundle\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.028545 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-internal-tls-certs\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.064655 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s6qb\" (UniqueName: \"kubernetes.io/projected/9d70e5d4-03d3-451e-9ef4-8f88d42a015c-kube-api-access-2s6qb\") pod \"neutron-7f556d549c-2bkn4\" (UID: \"9d70e5d4-03d3-451e-9ef4-8f88d42a015c\") " pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.120074 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6757ddbf5c-pprm6" podUID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.199:9696/\": read tcp 10.217.0.2:41178->10.217.0.199:9696: read: connection reset by peer" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.120804 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.317639 4803 generic.go:334] "Generic (PLEG): container finished" podID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerID="69eec6eafa07ee07a80b49fcc5b45fd29e0818680f57443cc5152c5e9613a0e8" exitCode=0 Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.317941 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6757ddbf5c-pprm6" event={"ID":"f4a1a8ca-af9c-47d3-82a6-1ce97b165924","Type":"ContainerDied","Data":"69eec6eafa07ee07a80b49fcc5b45fd29e0818680f57443cc5152c5e9613a0e8"} Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.323979 4803 generic.go:334] "Generic (PLEG): container finished" podID="53f3ea29-9273-4b38-8f97-0821042ab7fc" containerID="941855844e2addd58dcf242d6a56a1b21d6c296a848378eda90aff8647ab89ac" exitCode=0 Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.324008 4803 generic.go:334] "Generic (PLEG): container finished" podID="53f3ea29-9273-4b38-8f97-0821042ab7fc" containerID="3b62c9e9d1049e90333341856cdf9135fc3721b0df09dc45acd7a1a1c2f348ce" exitCode=143 Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.324922 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"53f3ea29-9273-4b38-8f97-0821042ab7fc","Type":"ContainerDied","Data":"941855844e2addd58dcf242d6a56a1b21d6c296a848378eda90aff8647ab89ac"} Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.324978 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"53f3ea29-9273-4b38-8f97-0821042ab7fc","Type":"ContainerDied","Data":"3b62c9e9d1049e90333341856cdf9135fc3721b0df09dc45acd7a1a1c2f348ce"} Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.397013 4803 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jqw45"] Jan 27 22:11:47 crc kubenswrapper[4803]: E0127 22:11:47.411052 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4a1a8ca_af9c_47d3_82a6_1ce97b165924.slice/crio-69eec6eafa07ee07a80b49fcc5b45fd29e0818680f57443cc5152c5e9613a0e8.scope\": RecentStats: unable to find data in memory cache]" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.511943 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.608426 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data-custom\") pod \"53f3ea29-9273-4b38-8f97-0821042ab7fc\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.608503 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data\") pod \"53f3ea29-9273-4b38-8f97-0821042ab7fc\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.608629 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-scripts\") pod \"53f3ea29-9273-4b38-8f97-0821042ab7fc\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.608663 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-combined-ca-bundle\") pod \"53f3ea29-9273-4b38-8f97-0821042ab7fc\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.608728 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f3ea29-9273-4b38-8f97-0821042ab7fc-logs\") pod \"53f3ea29-9273-4b38-8f97-0821042ab7fc\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.608766 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2klw\" (UniqueName: \"kubernetes.io/projected/53f3ea29-9273-4b38-8f97-0821042ab7fc-kube-api-access-k2klw\") pod \"53f3ea29-9273-4b38-8f97-0821042ab7fc\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.608798 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f3ea29-9273-4b38-8f97-0821042ab7fc-etc-machine-id\") pod \"53f3ea29-9273-4b38-8f97-0821042ab7fc\" (UID: \"53f3ea29-9273-4b38-8f97-0821042ab7fc\") " Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.609876 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f3ea29-9273-4b38-8f97-0821042ab7fc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "53f3ea29-9273-4b38-8f97-0821042ab7fc" (UID: "53f3ea29-9273-4b38-8f97-0821042ab7fc"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.613155 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53f3ea29-9273-4b38-8f97-0821042ab7fc-logs" (OuterVolumeSpecName: "logs") pod "53f3ea29-9273-4b38-8f97-0821042ab7fc" (UID: "53f3ea29-9273-4b38-8f97-0821042ab7fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.621828 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-scripts" (OuterVolumeSpecName: "scripts") pod "53f3ea29-9273-4b38-8f97-0821042ab7fc" (UID: "53f3ea29-9273-4b38-8f97-0821042ab7fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.621829 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "53f3ea29-9273-4b38-8f97-0821042ab7fc" (UID: "53f3ea29-9273-4b38-8f97-0821042ab7fc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.624838 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f3ea29-9273-4b38-8f97-0821042ab7fc-kube-api-access-k2klw" (OuterVolumeSpecName: "kube-api-access-k2klw") pod "53f3ea29-9273-4b38-8f97-0821042ab7fc" (UID: "53f3ea29-9273-4b38-8f97-0821042ab7fc"). InnerVolumeSpecName "kube-api-access-k2klw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.666679 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53f3ea29-9273-4b38-8f97-0821042ab7fc" (UID: "53f3ea29-9273-4b38-8f97-0821042ab7fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.704230 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data" (OuterVolumeSpecName: "config-data") pod "53f3ea29-9273-4b38-8f97-0821042ab7fc" (UID: "53f3ea29-9273-4b38-8f97-0821042ab7fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.711926 4803 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.711954 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.711963 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.711972 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f3ea29-9273-4b38-8f97-0821042ab7fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.711980 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f3ea29-9273-4b38-8f97-0821042ab7fc-logs\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.711991 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2klw\" (UniqueName: \"kubernetes.io/projected/53f3ea29-9273-4b38-8f97-0821042ab7fc-kube-api-access-k2klw\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.712003 4803 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f3ea29-9273-4b38-8f97-0821042ab7fc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:47 crc kubenswrapper[4803]: I0127 22:11:47.783753 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f556d549c-2bkn4"] Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.344596 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e408c3-f14c-4cad-a5b5-24d601fcb8d8","Type":"ContainerStarted","Data":"d32abfec1dd9f35d0b6cd9609c2c5f37e4690190ccce6e6a5a29363e6fdaa8eb"} Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.353615 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f556d549c-2bkn4" event={"ID":"9d70e5d4-03d3-451e-9ef4-8f88d42a015c","Type":"ContainerStarted","Data":"38327f48a627182d74b011c9e38b7ab50459c3612d8a50aa2da8561b5c5a272f"} Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.353671 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f556d549c-2bkn4" event={"ID":"9d70e5d4-03d3-451e-9ef4-8f88d42a015c","Type":"ContainerStarted","Data":"7a3e1a9b2549ee079b2240ddc44e87cf9f064522a6669c4ff081928f4c44236d"} Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.357989 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"53f3ea29-9273-4b38-8f97-0821042ab7fc","Type":"ContainerDied","Data":"65b598bc94ae4c7ac5698c8b44f94e61ebb2d665fdf05b0f48a72fba24868bed"} Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.358054 4803 scope.go:117] "RemoveContainer" containerID="941855844e2addd58dcf242d6a56a1b21d6c296a848378eda90aff8647ab89ac" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.358107 4803 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.373003 4803 generic.go:334] "Generic (PLEG): container finished" podID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerID="14b1690e6be58945815d2b583a94eb3e93557c1bd32cc470a5a63069f162fd95" exitCode=0 Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.375369 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqw45" event={"ID":"8557daa0-d032-4ce3-845b-2ff667b49c7a","Type":"ContainerDied","Data":"14b1690e6be58945815d2b583a94eb3e93557c1bd32cc470a5a63069f162fd95"} Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.375416 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqw45" event={"ID":"8557daa0-d032-4ce3-845b-2ff667b49c7a","Type":"ContainerStarted","Data":"47d663531a0f5ed84d62ae544e8531c99249a7efc278c7b8935c19a9a1de2a48"} Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.517238 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.524033 4803 scope.go:117] "RemoveContainer" containerID="3b62c9e9d1049e90333341856cdf9135fc3721b0df09dc45acd7a1a1c2f348ce" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.535730 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6757ddbf5c-pprm6" podUID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.199:9696/\": dial tcp 10.217.0.199:9696: connect: connection refused" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.538119 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.547676 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 22:11:48 crc kubenswrapper[4803]: E0127 22:11:48.548274 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f3ea29-9273-4b38-8f97-0821042ab7fc" containerName="cinder-api" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.548300 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f3ea29-9273-4b38-8f97-0821042ab7fc" containerName="cinder-api" Jan 27 22:11:48 crc kubenswrapper[4803]: E0127 22:11:48.548338 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f3ea29-9273-4b38-8f97-0821042ab7fc" containerName="cinder-api-log" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.548347 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f3ea29-9273-4b38-8f97-0821042ab7fc" containerName="cinder-api-log" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.548591 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f3ea29-9273-4b38-8f97-0821042ab7fc" containerName="cinder-api-log" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.548627 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f3ea29-9273-4b38-8f97-0821042ab7fc" containerName="cinder-api" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.550221 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.555440 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.555496 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.555660 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.583400 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.644812 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-scripts\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.644905 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.645027 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.645048 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69038d7c-7d07-4b92-a041-c27addfb7fba-logs\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.645078 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69038d7c-7d07-4b92-a041-c27addfb7fba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.645186 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-config-data-custom\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.645287 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-public-tls-certs\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.645321 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfljc\" (UniqueName: 
\"kubernetes.io/projected/69038d7c-7d07-4b92-a041-c27addfb7fba-kube-api-access-dfljc\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.645343 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-config-data\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.747739 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.747795 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69038d7c-7d07-4b92-a041-c27addfb7fba-logs\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.747839 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69038d7c-7d07-4b92-a041-c27addfb7fba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.747878 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-config-data-custom\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.747946 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-public-tls-certs\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.747969 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfljc\" (UniqueName: \"kubernetes.io/projected/69038d7c-7d07-4b92-a041-c27addfb7fba-kube-api-access-dfljc\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.747988 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-config-data\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.748022 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-scripts\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.748053 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.748884 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69038d7c-7d07-4b92-a041-c27addfb7fba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.749223 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69038d7c-7d07-4b92-a041-c27addfb7fba-logs\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.759381 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.759889 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-config-data\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.763299 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-scripts\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.763420 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-config-data-custom\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.763821 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.769708 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69038d7c-7d07-4b92-a041-c27addfb7fba-public-tls-certs\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.789979 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfljc\" (UniqueName: \"kubernetes.io/projected/69038d7c-7d07-4b92-a041-c27addfb7fba-kube-api-access-dfljc\") pod \"cinder-api-0\" (UID: \"69038d7c-7d07-4b92-a041-c27addfb7fba\") " pod="openstack/cinder-api-0" Jan 27 22:11:48 crc kubenswrapper[4803]: I0127 22:11:48.887419 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 22:11:49 crc kubenswrapper[4803]: I0127 22:11:49.408161 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f556d549c-2bkn4" event={"ID":"9d70e5d4-03d3-451e-9ef4-8f88d42a015c","Type":"ContainerStarted","Data":"0eb9f6004bfd850ab6eb4518879f2bc17cffaa9cf8776ff70c104d393f095c62"} Jan 27 22:11:49 crc kubenswrapper[4803]: I0127 22:11:49.408505 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:11:49 crc kubenswrapper[4803]: I0127 22:11:49.438655 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqw45" event={"ID":"8557daa0-d032-4ce3-845b-2ff667b49c7a","Type":"ContainerStarted","Data":"4cb4af0f7644e519d14707c2a04583f119460bee19bc289a4e25cded524d7e4d"} Jan 27 22:11:49 crc kubenswrapper[4803]: I0127 22:11:49.447033 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7f556d549c-2bkn4" podStartSLOduration=3.447010289 podStartE2EDuration="3.447010289s" podCreationTimestamp="2026-01-27 22:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:49.437103481 +0000 UTC m=+1461.853125190" watchObservedRunningTime="2026-01-27 22:11:49.447010289 +0000 UTC m=+1461.863031988" Jan 27 22:11:49 crc kubenswrapper[4803]: I0127 22:11:49.454319 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e408c3-f14c-4cad-a5b5-24d601fcb8d8","Type":"ContainerStarted","Data":"323088ee81b8caa75c358286d925cf084a7b7daeb9a79f5acb193a7351343998"} Jan 27 22:11:49 crc kubenswrapper[4803]: I0127 22:11:49.517567 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 22:11:50 crc kubenswrapper[4803]: I0127 22:11:50.336485 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f3ea29-9273-4b38-8f97-0821042ab7fc" path="/var/lib/kubelet/pods/53f3ea29-9273-4b38-8f97-0821042ab7fc/volumes" Jan 27 22:11:50 crc kubenswrapper[4803]: I0127 22:11:50.492787 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69038d7c-7d07-4b92-a041-c27addfb7fba","Type":"ContainerStarted","Data":"37ef27dd66f8193fa9da2741778f7a984121abec018fe0b31079def24b3fdc76"} Jan 27 22:11:50 crc kubenswrapper[4803]: I0127 22:11:50.492841 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69038d7c-7d07-4b92-a041-c27addfb7fba","Type":"ContainerStarted","Data":"9b632a07c1229bf41064256cf1000928f27889653b8f825d8aaae7191bd7a277"} Jan 27 22:11:50 crc kubenswrapper[4803]: I0127 22:11:50.937740 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:50 crc kubenswrapper[4803]: I0127 22:11:50.971609 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:51 crc kubenswrapper[4803]: I0127 22:11:51.510887 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e408c3-f14c-4cad-a5b5-24d601fcb8d8","Type":"ContainerStarted","Data":"d5cc94aeb87b89702edc081e30aa41d555d46927fd7222bac060e1b766b8e01e"} Jan 27 22:11:51 crc kubenswrapper[4803]: I0127 22:11:51.514460 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"69038d7c-7d07-4b92-a041-c27addfb7fba","Type":"ContainerStarted","Data":"f85d17bfd2ef01b78a4892ceed36a67ebf84dd8a577420b86bfd5d14b66f0f32"} Jan 27 22:11:51 crc kubenswrapper[4803]: I0127 22:11:51.514688 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 22:11:51 crc kubenswrapper[4803]: I0127 22:11:51.534743 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.66384447 podStartE2EDuration="7.534724886s" podCreationTimestamp="2026-01-27 22:11:44 +0000 UTC" firstStartedPulling="2026-01-27 22:11:45.35793733 +0000 UTC m=+1457.773959029" lastFinishedPulling="2026-01-27 22:11:50.228817746 +0000 UTC m=+1462.644839445" observedRunningTime="2026-01-27 22:11:51.531743936 +0000 UTC m=+1463.947765645" watchObservedRunningTime="2026-01-27 22:11:51.534724886 +0000 UTC m=+1463.950746585" Jan 27 22:11:51 crc kubenswrapper[4803]: I0127 22:11:51.559968 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.559949616 podStartE2EDuration="3.559949616s" podCreationTimestamp="2026-01-27 22:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:51.549898955 +0000 UTC m=+1463.965920654" watchObservedRunningTime="2026-01-27 22:11:51.559949616 +0000 UTC m=+1463.975971325" Jan 27 22:11:51 crc kubenswrapper[4803]: I0127 22:11:51.742133 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:11:51 crc kubenswrapper[4803]: I0127 22:11:51.820091 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rvh2q"] Jan 27 22:11:51 crc kubenswrapper[4803]: I0127 22:11:51.820830 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" podUID="2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" containerName="dnsmasq-dns" containerID="cri-o://24ac39c76b0dea6fb0a0ce7aa891496d6b726a67b25ec2d1d71a2d8e1f5e25ea" gracePeriod=10 Jan 27 22:11:52 crc kubenswrapper[4803]: I0127 22:11:52.056191 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 22:11:52 crc kubenswrapper[4803]: I0127 22:11:52.112207 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 22:11:52 crc kubenswrapper[4803]: I0127 22:11:52.556094 4803 generic.go:334] "Generic (PLEG): container finished" podID="2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" containerID="24ac39c76b0dea6fb0a0ce7aa891496d6b726a67b25ec2d1d71a2d8e1f5e25ea" exitCode=0 Jan 27 22:11:52 crc kubenswrapper[4803]: I0127 22:11:52.556453 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" event={"ID":"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c","Type":"ContainerDied","Data":"24ac39c76b0dea6fb0a0ce7aa891496d6b726a67b25ec2d1d71a2d8e1f5e25ea"} Jan 27 22:11:52 crc kubenswrapper[4803]: I0127 22:11:52.559995 4803 generic.go:334] "Generic (PLEG): container finished" podID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerID="4cb4af0f7644e519d14707c2a04583f119460bee19bc289a4e25cded524d7e4d" exitCode=0 Jan 27 22:11:52 crc kubenswrapper[4803]: I0127 22:11:52.560048 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqw45" 
event={"ID":"8557daa0-d032-4ce3-845b-2ff667b49c7a","Type":"ContainerDied","Data":"4cb4af0f7644e519d14707c2a04583f119460bee19bc289a4e25cded524d7e4d"} Jan 27 22:11:52 crc kubenswrapper[4803]: I0127 22:11:52.583554 4803 generic.go:334] "Generic (PLEG): container finished" podID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerID="78568fb3db5c74fd564077b566b910d6edb0c0f3c55607d46b0f159f38873b29" exitCode=0 Jan 27 22:11:52 crc kubenswrapper[4803]: I0127 22:11:52.584903 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6757ddbf5c-pprm6" event={"ID":"f4a1a8ca-af9c-47d3-82a6-1ce97b165924","Type":"ContainerDied","Data":"78568fb3db5c74fd564077b566b910d6edb0c0f3c55607d46b0f159f38873b29"} Jan 27 22:11:52 crc kubenswrapper[4803]: I0127 22:11:52.585007 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 22:11:52 crc kubenswrapper[4803]: I0127 22:11:52.586244 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="01856d15-d761-4a96-9c28-8de6b7a980e8" containerName="cinder-scheduler" containerID="cri-o://0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4" gracePeriod=30 Jan 27 22:11:52 crc kubenswrapper[4803]: I0127 22:11:52.586529 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="01856d15-d761-4a96-9c28-8de6b7a980e8" containerName="probe" containerID="cri-o://fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26" gracePeriod=30 Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.100080 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.196199 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-sb\") pod \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.196483 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-nb\") pod \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.196526 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-svc\") pod \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.196624 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-config\") pod \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.196685 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kl7v\" (UniqueName: \"kubernetes.io/projected/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-kube-api-access-9kl7v\") pod \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " Jan 27 22:11:53 crc 
kubenswrapper[4803]: I0127 22:11:53.196741 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-swift-storage-0\") pod \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\" (UID: \"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.229191 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-kube-api-access-9kl7v" (OuterVolumeSpecName: "kube-api-access-9kl7v") pod "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" (UID: "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c"). InnerVolumeSpecName "kube-api-access-9kl7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.289577 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" (UID: "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.299958 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kl7v\" (UniqueName: \"kubernetes.io/projected/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-kube-api-access-9kl7v\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.299996 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.311087 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" (UID: "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.348803 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" (UID: "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.375371 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" (UID: "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.385653 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.394016 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-config" (OuterVolumeSpecName: "config") pod "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" (UID: "2b06bd9a-2a7b-4a6e-aa14-4f58d642717c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.405084 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.405283 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.405366 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.405444 4803 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.507058 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-public-tls-certs\") pod \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.507409 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-internal-tls-certs\") pod \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.507497 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-config\") pod \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.507533 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-httpd-config\") pod \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.507560 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2p45\" (UniqueName: \"kubernetes.io/projected/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-kube-api-access-z2p45\") pod \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.507648 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-combined-ca-bundle\") pod \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.507734 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-ovndb-tls-certs\") pod \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\" (UID: \"f4a1a8ca-af9c-47d3-82a6-1ce97b165924\") " Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.514071 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-kube-api-access-z2p45" (OuterVolumeSpecName: "kube-api-access-z2p45") pod "f4a1a8ca-af9c-47d3-82a6-1ce97b165924" (UID: "f4a1a8ca-af9c-47d3-82a6-1ce97b165924"). InnerVolumeSpecName "kube-api-access-z2p45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.520754 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f4a1a8ca-af9c-47d3-82a6-1ce97b165924" (UID: "f4a1a8ca-af9c-47d3-82a6-1ce97b165924"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.588025 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f4a1a8ca-af9c-47d3-82a6-1ce97b165924" (UID: "f4a1a8ca-af9c-47d3-82a6-1ce97b165924"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.591459 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-config" (OuterVolumeSpecName: "config") pod "f4a1a8ca-af9c-47d3-82a6-1ce97b165924" (UID: "f4a1a8ca-af9c-47d3-82a6-1ce97b165924"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.594007 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f4a1a8ca-af9c-47d3-82a6-1ce97b165924" (UID: "f4a1a8ca-af9c-47d3-82a6-1ce97b165924"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.599545 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" event={"ID":"2b06bd9a-2a7b-4a6e-aa14-4f58d642717c","Type":"ContainerDied","Data":"14a47211dd76d7b8876f0289007dea018127628db4923357da5f0f78d90bd2cd"} Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.599595 4803 scope.go:117] "RemoveContainer" containerID="24ac39c76b0dea6fb0a0ce7aa891496d6b726a67b25ec2d1d71a2d8e1f5e25ea" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.600092 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-rvh2q" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.605309 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4a1a8ca-af9c-47d3-82a6-1ce97b165924" (UID: "f4a1a8ca-af9c-47d3-82a6-1ce97b165924"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.608636 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqw45" event={"ID":"8557daa0-d032-4ce3-845b-2ff667b49c7a","Type":"ContainerStarted","Data":"5bb86a95edfba57003c69da0086cf3fa56e31c9d3ac1d6d219b514f2fd1e46f6"} Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.611290 4803 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.611317 4803 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.611349 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.611364 4803 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.611376 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2p45\" (UniqueName: \"kubernetes.io/projected/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-kube-api-access-z2p45\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.611390 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.613616 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6757ddbf5c-pprm6" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.613624 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6757ddbf5c-pprm6" event={"ID":"f4a1a8ca-af9c-47d3-82a6-1ce97b165924","Type":"ContainerDied","Data":"c6439faa79cf9076957d3be899fa35ae6ca71a0e07b9ea7883aeba2887389ecc"} Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.616551 4803 generic.go:334] "Generic (PLEG): container finished" podID="01856d15-d761-4a96-9c28-8de6b7a980e8" containerID="0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4" exitCode=0 Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.617235 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01856d15-d761-4a96-9c28-8de6b7a980e8","Type":"ContainerDied","Data":"0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4"} Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.623361 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f4a1a8ca-af9c-47d3-82a6-1ce97b165924" (UID: "f4a1a8ca-af9c-47d3-82a6-1ce97b165924"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.654995 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jqw45" podStartSLOduration=2.906084952 podStartE2EDuration="7.654971831s" podCreationTimestamp="2026-01-27 22:11:46 +0000 UTC" firstStartedPulling="2026-01-27 22:11:48.384049204 +0000 UTC m=+1460.800070903" lastFinishedPulling="2026-01-27 22:11:53.132936083 +0000 UTC m=+1465.548957782" observedRunningTime="2026-01-27 22:11:53.631232021 +0000 UTC m=+1466.047253720" watchObservedRunningTime="2026-01-27 22:11:53.654971831 +0000 UTC m=+1466.070993520" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.660974 4803 scope.go:117] "RemoveContainer" containerID="359f66355a0df0762e8d92b57360fe5b05969ebdc795121a046c141f783e1cd3" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.665240 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rvh2q"] Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.675097 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rvh2q"] Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.698079 4803 scope.go:117] "RemoveContainer" containerID="69eec6eafa07ee07a80b49fcc5b45fd29e0818680f57443cc5152c5e9613a0e8" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.714304 4803 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a8ca-af9c-47d3-82a6-1ce97b165924-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.727328 4803 scope.go:117] "RemoveContainer" containerID="78568fb3db5c74fd564077b566b910d6edb0c0f3c55607d46b0f159f38873b29" Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.950890 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6757ddbf5c-pprm6"] Jan 27 22:11:53 crc kubenswrapper[4803]: I0127 22:11:53.961053 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6757ddbf5c-pprm6"] Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.076214 4803 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.224513 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data-custom\") pod \"01856d15-d761-4a96-9c28-8de6b7a980e8\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.224611 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data\") pod \"01856d15-d761-4a96-9c28-8de6b7a980e8\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.224682 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-scripts\") pod \"01856d15-d761-4a96-9c28-8de6b7a980e8\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.224791 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-combined-ca-bundle\") pod \"01856d15-d761-4a96-9c28-8de6b7a980e8\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.224873 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01856d15-d761-4a96-9c28-8de6b7a980e8-etc-machine-id\") pod \"01856d15-d761-4a96-9c28-8de6b7a980e8\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.224898 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kn5s\" (UniqueName: \"kubernetes.io/projected/01856d15-d761-4a96-9c28-8de6b7a980e8-kube-api-access-5kn5s\") pod \"01856d15-d761-4a96-9c28-8de6b7a980e8\" (UID: \"01856d15-d761-4a96-9c28-8de6b7a980e8\") " Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.228118 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01856d15-d761-4a96-9c28-8de6b7a980e8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "01856d15-d761-4a96-9c28-8de6b7a980e8" (UID: "01856d15-d761-4a96-9c28-8de6b7a980e8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.231392 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-scripts" (OuterVolumeSpecName: "scripts") pod "01856d15-d761-4a96-9c28-8de6b7a980e8" (UID: "01856d15-d761-4a96-9c28-8de6b7a980e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.251008 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01856d15-d761-4a96-9c28-8de6b7a980e8-kube-api-access-5kn5s" (OuterVolumeSpecName: "kube-api-access-5kn5s") pod "01856d15-d761-4a96-9c28-8de6b7a980e8" (UID: "01856d15-d761-4a96-9c28-8de6b7a980e8"). InnerVolumeSpecName "kube-api-access-5kn5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.265016 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "01856d15-d761-4a96-9c28-8de6b7a980e8" (UID: "01856d15-d761-4a96-9c28-8de6b7a980e8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.319113 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01856d15-d761-4a96-9c28-8de6b7a980e8" (UID: "01856d15-d761-4a96-9c28-8de6b7a980e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.329602 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.329643 4803 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01856d15-d761-4a96-9c28-8de6b7a980e8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.329658 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kn5s\" (UniqueName: \"kubernetes.io/projected/01856d15-d761-4a96-9c28-8de6b7a980e8-kube-api-access-5kn5s\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.329670 4803 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.329683 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.330322 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" path="/var/lib/kubelet/pods/2b06bd9a-2a7b-4a6e-aa14-4f58d642717c/volumes" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.331441 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" path="/var/lib/kubelet/pods/f4a1a8ca-af9c-47d3-82a6-1ce97b165924/volumes" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.394019 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data" (OuterVolumeSpecName: "config-data") pod "01856d15-d761-4a96-9c28-8de6b7a980e8" (UID: "01856d15-d761-4a96-9c28-8de6b7a980e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.431341 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01856d15-d761-4a96-9c28-8de6b7a980e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.628883 4803 generic.go:334] "Generic (PLEG): container finished" podID="01856d15-d761-4a96-9c28-8de6b7a980e8" containerID="fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26" exitCode=0 Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.628982 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01856d15-d761-4a96-9c28-8de6b7a980e8","Type":"ContainerDied","Data":"fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26"} Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.629054 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01856d15-d761-4a96-9c28-8de6b7a980e8","Type":"ContainerDied","Data":"0713cb66cc2c0d825bd5f31438519e249e8f68878f2039eaa3595b4d2abc5715"} Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.629074 4803 scope.go:117] "RemoveContainer" containerID="fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.629575 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.660378 4803 scope.go:117] "RemoveContainer" containerID="0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.680920 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.686622 4803 scope.go:117] "RemoveContainer" containerID="fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26" Jan 27 22:11:54 crc kubenswrapper[4803]: E0127 22:11:54.687143 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26\": container with ID starting with fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26 not found: ID does not exist" containerID="fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.687172 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26"} err="failed to get container status \"fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26\": rpc error: code = NotFound desc = could not find container \"fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26\": container with ID starting with fe1c08330e51ca818f0484390e59c717d320f3bdf0fb8acdb53d40f859585b26 not found: ID does not exist" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.687192 4803 scope.go:117] "RemoveContainer" containerID="0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4" Jan 27 22:11:54 crc kubenswrapper[4803]: E0127 22:11:54.690088 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4\": container with 
ID starting with 0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4 not found: ID does not exist" containerID="0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.690295 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4"} err="failed to get container status \"0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4\": rpc error: code = NotFound desc = could not find container \"0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4\": container with ID starting with 0bebfcb2bc33bc525a8d6556554985c9ef8b6741b69da71331055508efd094d4 not found: ID does not exist" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.703547 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739006 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 22:11:54 crc kubenswrapper[4803]: E0127 22:11:54.739419 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerName="neutron-httpd" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739448 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerName="neutron-httpd" Jan 27 22:11:54 crc kubenswrapper[4803]: E0127 22:11:54.739466 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01856d15-d761-4a96-9c28-8de6b7a980e8" containerName="probe" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739472 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="01856d15-d761-4a96-9c28-8de6b7a980e8" containerName="probe" Jan 27 22:11:54 crc kubenswrapper[4803]: E0127 22:11:54.739500 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" containerName="init" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739506 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" containerName="init" Jan 27 22:11:54 crc kubenswrapper[4803]: E0127 22:11:54.739527 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01856d15-d761-4a96-9c28-8de6b7a980e8" containerName="cinder-scheduler" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739534 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="01856d15-d761-4a96-9c28-8de6b7a980e8" containerName="cinder-scheduler" Jan 27 22:11:54 crc kubenswrapper[4803]: E0127 22:11:54.739549 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" containerName="dnsmasq-dns" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739555 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" containerName="dnsmasq-dns" Jan 27 22:11:54 crc kubenswrapper[4803]: E0127 22:11:54.739576 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerName="neutron-api" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739590 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerName="neutron-api" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739787 4803 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="01856d15-d761-4a96-9c28-8de6b7a980e8" containerName="probe" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739801 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerName="neutron-api" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739809 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b06bd9a-2a7b-4a6e-aa14-4f58d642717c" containerName="dnsmasq-dns" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739818 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a1a8ca-af9c-47d3-82a6-1ce97b165924" containerName="neutron-httpd" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.739830 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="01856d15-d761-4a96-9c28-8de6b7a980e8" containerName="cinder-scheduler" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.741210 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.747230 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.757407 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.837603 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.837658 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-scripts\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.837737 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.837756 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.837886 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-config-data\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.837941 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5584n\" (UniqueName: 
\"kubernetes.io/projected/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-kube-api-access-5584n\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.939301 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-config-data\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.939380 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5584n\" (UniqueName: \"kubernetes.io/projected/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-kube-api-access-5584n\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.939434 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.939465 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-scripts\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.939508 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.939528 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.939808 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.944421 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.945100 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 
22:11:54.945387 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-scripts\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.946381 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-config-data\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:54 crc kubenswrapper[4803]: I0127 22:11:54.958917 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5584n\" (UniqueName: \"kubernetes.io/projected/3427d6c9-1902-41c1-8b41-fa9f2cc92dc7-kube-api-access-5584n\") pod \"cinder-scheduler-0\" (UID: \"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7\") " pod="openstack/cinder-scheduler-0" Jan 27 22:11:55 crc kubenswrapper[4803]: I0127 22:11:55.010604 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:55 crc kubenswrapper[4803]: I0127 22:11:55.012271 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f74474d96-gcmdd" Jan 27 22:11:55 crc kubenswrapper[4803]: I0127 22:11:55.074650 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 22:11:55 crc kubenswrapper[4803]: I0127 22:11:55.081323 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-599bbf7fdb-qcdcv"] Jan 27 22:11:55 crc kubenswrapper[4803]: I0127 22:11:55.081648 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-599bbf7fdb-qcdcv" podUID="a40848f3-72e6-4de4-ac01-e68adec94fc2" containerName="barbican-api-log" containerID="cri-o://f3d49fc150b52ca36b05e5c5f96f6e9924ea37d3d3ee59d60abaeb92cd16709e" gracePeriod=30 Jan 27 22:11:55 crc kubenswrapper[4803]: I0127 22:11:55.081683 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-599bbf7fdb-qcdcv" podUID="a40848f3-72e6-4de4-ac01-e68adec94fc2" containerName="barbican-api" containerID="cri-o://af4f5e910378d94e6a1207127ee81bcd1053a61b73a41a5a651e7c092b1502e0" gracePeriod=30 Jan 27 22:11:55 crc kubenswrapper[4803]: I0127 22:11:55.650175 4803 generic.go:334] "Generic (PLEG): container finished" podID="a40848f3-72e6-4de4-ac01-e68adec94fc2" containerID="f3d49fc150b52ca36b05e5c5f96f6e9924ea37d3d3ee59d60abaeb92cd16709e" exitCode=143 Jan 27 22:11:55 crc kubenswrapper[4803]: I0127 22:11:55.650268 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-599bbf7fdb-qcdcv" event={"ID":"a40848f3-72e6-4de4-ac01-e68adec94fc2","Type":"ContainerDied","Data":"f3d49fc150b52ca36b05e5c5f96f6e9924ea37d3d3ee59d60abaeb92cd16709e"} Jan 27 22:11:55 crc kubenswrapper[4803]: I0127 22:11:55.654452 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 22:11:55 crc kubenswrapper[4803]: W0127 22:11:55.661994 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3427d6c9_1902_41c1_8b41_fa9f2cc92dc7.slice/crio-6fd25f4a781b5ec1c007bc53a122f97d96ef83b537737f6aee4e7d25c717948d WatchSource:0}: Error finding container 
6fd25f4a781b5ec1c007bc53a122f97d96ef83b537737f6aee4e7d25c717948d: Status 404 returned error can't find the container with id 6fd25f4a781b5ec1c007bc53a122f97d96ef83b537737f6aee4e7d25c717948d Jan 27 22:11:56 crc kubenswrapper[4803]: I0127 22:11:56.321243 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01856d15-d761-4a96-9c28-8de6b7a980e8" path="/var/lib/kubelet/pods/01856d15-d761-4a96-9c28-8de6b7a980e8/volumes" Jan 27 22:11:56 crc kubenswrapper[4803]: I0127 22:11:56.539138 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:56 crc kubenswrapper[4803]: I0127 22:11:56.539207 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:11:56 crc kubenswrapper[4803]: I0127 22:11:56.695874 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7","Type":"ContainerStarted","Data":"e4447e5dbe20b2f3719136a7f97068001abb3a38ede778b798104196088ed509"} Jan 27 22:11:56 crc kubenswrapper[4803]: I0127 22:11:56.696216 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7","Type":"ContainerStarted","Data":"6fd25f4a781b5ec1c007bc53a122f97d96ef83b537737f6aee4e7d25c717948d"} Jan 27 22:11:57 crc kubenswrapper[4803]: I0127 22:11:57.631794 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jqw45" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="registry-server" probeResult="failure" output=< Jan 27 22:11:57 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:11:57 crc kubenswrapper[4803]: > Jan 27 22:11:57 crc kubenswrapper[4803]: I0127 22:11:57.706719 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7","Type":"ContainerStarted","Data":"d1d4f6830bf99eaf6bfa5b6da8c036d8ee343c59538469ddaedafe5496144d7d"} Jan 27 22:11:57 crc kubenswrapper[4803]: I0127 22:11:57.750474 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.750451852 podStartE2EDuration="3.750451852s" podCreationTimestamp="2026-01-27 22:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:11:57.724738759 +0000 UTC m=+1470.140760468" watchObservedRunningTime="2026-01-27 22:11:57.750451852 +0000 UTC m=+1470.166473571" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.719150 4803 generic.go:334] "Generic (PLEG): container finished" podID="a40848f3-72e6-4de4-ac01-e68adec94fc2" containerID="af4f5e910378d94e6a1207127ee81bcd1053a61b73a41a5a651e7c092b1502e0" exitCode=0 Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.719277 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-599bbf7fdb-qcdcv" event={"ID":"a40848f3-72e6-4de4-ac01-e68adec94fc2","Type":"ContainerDied","Data":"af4f5e910378d94e6a1207127ee81bcd1053a61b73a41a5a651e7c092b1502e0"} Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.719338 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-599bbf7fdb-qcdcv" 
event={"ID":"a40848f3-72e6-4de4-ac01-e68adec94fc2","Type":"ContainerDied","Data":"87dfd14d910a85c7478063065122d032b05824471747f3c4f1f33176af698a8d"} Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.719353 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87dfd14d910a85c7478063065122d032b05824471747f3c4f1f33176af698a8d" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.753206 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.851483 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data\") pod \"a40848f3-72e6-4de4-ac01-e68adec94fc2\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.851609 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-combined-ca-bundle\") pod \"a40848f3-72e6-4de4-ac01-e68adec94fc2\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.851634 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40848f3-72e6-4de4-ac01-e68adec94fc2-logs\") pod \"a40848f3-72e6-4de4-ac01-e68adec94fc2\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.851660 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data-custom\") pod \"a40848f3-72e6-4de4-ac01-e68adec94fc2\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.851708 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc5nn\" (UniqueName: \"kubernetes.io/projected/a40848f3-72e6-4de4-ac01-e68adec94fc2-kube-api-access-sc5nn\") pod \"a40848f3-72e6-4de4-ac01-e68adec94fc2\" (UID: \"a40848f3-72e6-4de4-ac01-e68adec94fc2\") " Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.853766 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a40848f3-72e6-4de4-ac01-e68adec94fc2-logs" (OuterVolumeSpecName: "logs") pod "a40848f3-72e6-4de4-ac01-e68adec94fc2" (UID: "a40848f3-72e6-4de4-ac01-e68adec94fc2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.858659 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40848f3-72e6-4de4-ac01-e68adec94fc2-kube-api-access-sc5nn" (OuterVolumeSpecName: "kube-api-access-sc5nn") pod "a40848f3-72e6-4de4-ac01-e68adec94fc2" (UID: "a40848f3-72e6-4de4-ac01-e68adec94fc2"). InnerVolumeSpecName "kube-api-access-sc5nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.894867 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a40848f3-72e6-4de4-ac01-e68adec94fc2" (UID: "a40848f3-72e6-4de4-ac01-e68adec94fc2"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.902716 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a40848f3-72e6-4de4-ac01-e68adec94fc2" (UID: "a40848f3-72e6-4de4-ac01-e68adec94fc2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.951801 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data" (OuterVolumeSpecName: "config-data") pod "a40848f3-72e6-4de4-ac01-e68adec94fc2" (UID: "a40848f3-72e6-4de4-ac01-e68adec94fc2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.954708 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc5nn\" (UniqueName: \"kubernetes.io/projected/a40848f3-72e6-4de4-ac01-e68adec94fc2-kube-api-access-sc5nn\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.954747 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.954760 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.954771 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40848f3-72e6-4de4-ac01-e68adec94fc2-logs\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:58 crc kubenswrapper[4803]: I0127 22:11:58.954783 4803 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a40848f3-72e6-4de4-ac01-e68adec94fc2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 22:11:59 crc kubenswrapper[4803]: I0127 22:11:59.728444 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-599bbf7fdb-qcdcv" Jan 27 22:11:59 crc kubenswrapper[4803]: I0127 22:11:59.762283 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-599bbf7fdb-qcdcv"] Jan 27 22:11:59 crc kubenswrapper[4803]: I0127 22:11:59.772822 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-599bbf7fdb-qcdcv"] Jan 27 22:12:00 crc kubenswrapper[4803]: I0127 22:12:00.075462 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 22:12:00 crc kubenswrapper[4803]: I0127 22:12:00.206606 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:12:00 crc kubenswrapper[4803]: I0127 22:12:00.208816 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5b8df6b68b-dmsbm" Jan 27 22:12:00 crc kubenswrapper[4803]: I0127 22:12:00.318758 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a40848f3-72e6-4de4-ac01-e68adec94fc2" path="/var/lib/kubelet/pods/a40848f3-72e6-4de4-ac01-e68adec94fc2/volumes" Jan 27 22:12:01 crc kubenswrapper[4803]: I0127 22:12:01.192304 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 22:12:01 crc kubenswrapper[4803]: I0127 22:12:01.282725 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-75677f8887-xwsk2" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.425202 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 22:12:02 crc kubenswrapper[4803]: E0127 22:12:02.426792 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40848f3-72e6-4de4-ac01-e68adec94fc2" containerName="barbican-api-log" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.427009 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40848f3-72e6-4de4-ac01-e68adec94fc2" containerName="barbican-api-log" Jan 27 22:12:02 crc kubenswrapper[4803]: E0127 22:12:02.427122 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40848f3-72e6-4de4-ac01-e68adec94fc2" containerName="barbican-api" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.427203 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40848f3-72e6-4de4-ac01-e68adec94fc2" containerName="barbican-api" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.427640 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40848f3-72e6-4de4-ac01-e68adec94fc2" containerName="barbican-api-log" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.427748 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40848f3-72e6-4de4-ac01-e68adec94fc2" containerName="barbican-api" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.428766 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.431622 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.432578 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-2b2jt" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.441290 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.448734 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.541348 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w579n\" (UniqueName: \"kubernetes.io/projected/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-kube-api-access-w579n\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.541588 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-openstack-config-secret\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.541830 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-openstack-config\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.541970 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.643260 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-openstack-config-secret\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.643455 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-openstack-config\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.644215 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-openstack-config\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.644348 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.644403 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w579n\" (UniqueName: \"kubernetes.io/projected/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-kube-api-access-w579n\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.650155 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.654022 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-openstack-config-secret\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.663787 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w579n\" (UniqueName: \"kubernetes.io/projected/70c0e109-5a8c-4c70-87a6-bc31ed1a001d-kube-api-access-w579n\") pod \"openstackclient\" (UID: \"70c0e109-5a8c-4c70-87a6-bc31ed1a001d\") " pod="openstack/openstackclient" Jan 27 22:12:02 crc kubenswrapper[4803]: I0127 22:12:02.760868 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 22:12:03 crc kubenswrapper[4803]: W0127 22:12:03.308479 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70c0e109_5a8c_4c70_87a6_bc31ed1a001d.slice/crio-3bfe71d157ec9cd3ee91ae8aaf82847ce1dc08a3ffa2b9a655c075af8d86c49c WatchSource:0}: Error finding container 3bfe71d157ec9cd3ee91ae8aaf82847ce1dc08a3ffa2b9a655c075af8d86c49c: Status 404 returned error can't find the container with id 3bfe71d157ec9cd3ee91ae8aaf82847ce1dc08a3ffa2b9a655c075af8d86c49c Jan 27 22:12:03 crc kubenswrapper[4803]: I0127 22:12:03.312590 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 22:12:03 crc kubenswrapper[4803]: I0127 22:12:03.771238 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"70c0e109-5a8c-4c70-87a6-bc31ed1a001d","Type":"ContainerStarted","Data":"3bfe71d157ec9cd3ee91ae8aaf82847ce1dc08a3ffa2b9a655c075af8d86c49c"} Jan 27 22:12:05 crc kubenswrapper[4803]: I0127 22:12:05.291138 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.139807 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.141464 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8215d5aa-a30a-4a03-8058-509b5d04b261" containerName="glance-log" containerID="cri-o://ef4cf056b8f9b84ecba3e9ad2a548f23e4339dc78dcb5d24c92c6a7502b9af85" gracePeriod=30 Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.141558 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8215d5aa-a30a-4a03-8058-509b5d04b261" containerName="glance-httpd" containerID="cri-o://358f275c56eb86e806cfe67db4dc7828a1452e7f59367d2056a388ab1dbad289" gracePeriod=30 Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.594597 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jqw45" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="registry-server" probeResult="failure" output=< Jan 27 22:12:07 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:12:07 crc kubenswrapper[4803]: > Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.820289 4803 generic.go:334] "Generic (PLEG): container finished" podID="8215d5aa-a30a-4a03-8058-509b5d04b261" containerID="ef4cf056b8f9b84ecba3e9ad2a548f23e4339dc78dcb5d24c92c6a7502b9af85" exitCode=143 Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.820331 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8215d5aa-a30a-4a03-8058-509b5d04b261","Type":"ContainerDied","Data":"ef4cf056b8f9b84ecba3e9ad2a548f23e4339dc78dcb5d24c92c6a7502b9af85"} Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.899988 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-54c764888c-dpmfw"] Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.902433 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.906915 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.907101 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.907258 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.926286 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-54c764888c-dpmfw"] Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.976648 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/912aaad5-2b5b-431b-821f-0ba813a0faaf-etc-swift\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.976781 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktbtr\" (UniqueName: \"kubernetes.io/projected/912aaad5-2b5b-431b-821f-0ba813a0faaf-kube-api-access-ktbtr\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.976885 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-internal-tls-certs\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.976964 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-combined-ca-bundle\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.977058 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-public-tls-certs\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.977093 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-config-data\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.977171 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/912aaad5-2b5b-431b-821f-0ba813a0faaf-run-httpd\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " 
pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:07 crc kubenswrapper[4803]: I0127 22:12:07.977213 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/912aaad5-2b5b-431b-821f-0ba813a0faaf-log-httpd\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.083490 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/912aaad5-2b5b-431b-821f-0ba813a0faaf-etc-swift\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.083573 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktbtr\" (UniqueName: \"kubernetes.io/projected/912aaad5-2b5b-431b-821f-0ba813a0faaf-kube-api-access-ktbtr\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.083622 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-internal-tls-certs\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.083681 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-combined-ca-bundle\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.083726 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-public-tls-certs\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.083778 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-config-data\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.083901 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/912aaad5-2b5b-431b-821f-0ba813a0faaf-run-httpd\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.083968 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/912aaad5-2b5b-431b-821f-0ba813a0faaf-log-httpd\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 
22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.085003 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/912aaad5-2b5b-431b-821f-0ba813a0faaf-log-httpd\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.089086 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/912aaad5-2b5b-431b-821f-0ba813a0faaf-run-httpd\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.091593 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/912aaad5-2b5b-431b-821f-0ba813a0faaf-etc-swift\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.092074 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-combined-ca-bundle\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.094275 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-internal-tls-certs\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.095387 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-public-tls-certs\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.096105 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/912aaad5-2b5b-431b-821f-0ba813a0faaf-config-data\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.102154 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktbtr\" (UniqueName: \"kubernetes.io/projected/912aaad5-2b5b-431b-821f-0ba813a0faaf-kube-api-access-ktbtr\") pod \"swift-proxy-54c764888c-dpmfw\" (UID: \"912aaad5-2b5b-431b-821f-0ba813a0faaf\") " pod="openstack/swift-proxy-54c764888c-dpmfw"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.237331 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-54c764888c-dpmfw"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.545217 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ffj48"]
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.546990 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ffj48"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.558048 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ffj48"]
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.601556 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488bf67e-5edf-45f8-8ac9-a12e75646525-operator-scripts\") pod \"nova-api-db-create-ffj48\" (UID: \"488bf67e-5edf-45f8-8ac9-a12e75646525\") " pod="openstack/nova-api-db-create-ffj48"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.601794 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bgkt\" (UniqueName: \"kubernetes.io/projected/488bf67e-5edf-45f8-8ac9-a12e75646525-kube-api-access-6bgkt\") pod \"nova-api-db-create-ffj48\" (UID: \"488bf67e-5edf-45f8-8ac9-a12e75646525\") " pod="openstack/nova-api-db-create-ffj48"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.649284 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-29d72"]
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.650713 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-29d72"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.663467 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-29d72"]
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.704168 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bgkt\" (UniqueName: \"kubernetes.io/projected/488bf67e-5edf-45f8-8ac9-a12e75646525-kube-api-access-6bgkt\") pod \"nova-api-db-create-ffj48\" (UID: \"488bf67e-5edf-45f8-8ac9-a12e75646525\") " pod="openstack/nova-api-db-create-ffj48"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.704231 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-operator-scripts\") pod \"nova-cell0-db-create-29d72\" (UID: \"4c5ddc4c-65f5-4b87-b30c-6c63031f8826\") " pod="openstack/nova-cell0-db-create-29d72"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.704302 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlzq9\" (UniqueName: \"kubernetes.io/projected/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-kube-api-access-zlzq9\") pod \"nova-cell0-db-create-29d72\" (UID: \"4c5ddc4c-65f5-4b87-b30c-6c63031f8826\") " pod="openstack/nova-cell0-db-create-29d72"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.704353 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488bf67e-5edf-45f8-8ac9-a12e75646525-operator-scripts\") pod \"nova-api-db-create-ffj48\" (UID: \"488bf67e-5edf-45f8-8ac9-a12e75646525\") " pod="openstack/nova-api-db-create-ffj48"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.706510 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488bf67e-5edf-45f8-8ac9-a12e75646525-operator-scripts\") pod \"nova-api-db-create-ffj48\" (UID: \"488bf67e-5edf-45f8-8ac9-a12e75646525\") " pod="openstack/nova-api-db-create-ffj48"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.709511 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9b97-account-create-update-5fk5w"]
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.711314 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9b97-account-create-update-5fk5w"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.713544 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.734417 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9b97-account-create-update-5fk5w"]
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.741624 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bgkt\" (UniqueName: \"kubernetes.io/projected/488bf67e-5edf-45f8-8ac9-a12e75646525-kube-api-access-6bgkt\") pod \"nova-api-db-create-ffj48\" (UID: \"488bf67e-5edf-45f8-8ac9-a12e75646525\") " pod="openstack/nova-api-db-create-ffj48"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.772800 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-n86x7"]
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.774834 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n86x7"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.787944 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-n86x7"]
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.806398 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a043d332-9921-4219-9ad6-12e0cb2e31b9-operator-scripts\") pod \"nova-cell1-db-create-n86x7\" (UID: \"a043d332-9921-4219-9ad6-12e0cb2e31b9\") " pod="openstack/nova-cell1-db-create-n86x7"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.806489 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt26g\" (UniqueName: \"kubernetes.io/projected/26f931c2-83c8-4d1a-88ff-4483d4aba42d-kube-api-access-qt26g\") pod \"nova-api-9b97-account-create-update-5fk5w\" (UID: \"26f931c2-83c8-4d1a-88ff-4483d4aba42d\") " pod="openstack/nova-api-9b97-account-create-update-5fk5w"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.806570 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-operator-scripts\") pod \"nova-cell0-db-create-29d72\" (UID: \"4c5ddc4c-65f5-4b87-b30c-6c63031f8826\") " pod="openstack/nova-cell0-db-create-29d72"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.806607 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f931c2-83c8-4d1a-88ff-4483d4aba42d-operator-scripts\") pod \"nova-api-9b97-account-create-update-5fk5w\" (UID: \"26f931c2-83c8-4d1a-88ff-4483d4aba42d\") " pod="openstack/nova-api-9b97-account-create-update-5fk5w"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.806644 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlzq9\" (UniqueName: \"kubernetes.io/projected/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-kube-api-access-zlzq9\") pod \"nova-cell0-db-create-29d72\" (UID: \"4c5ddc4c-65f5-4b87-b30c-6c63031f8826\") " pod="openstack/nova-cell0-db-create-29d72"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.806687 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k7mw\" (UniqueName: \"kubernetes.io/projected/a043d332-9921-4219-9ad6-12e0cb2e31b9-kube-api-access-2k7mw\") pod \"nova-cell1-db-create-n86x7\" (UID: \"a043d332-9921-4219-9ad6-12e0cb2e31b9\") " pod="openstack/nova-cell1-db-create-n86x7"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.807361 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-operator-scripts\") pod \"nova-cell0-db-create-29d72\" (UID: \"4c5ddc4c-65f5-4b87-b30c-6c63031f8826\") " pod="openstack/nova-cell0-db-create-29d72"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.855004 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlzq9\" (UniqueName: \"kubernetes.io/projected/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-kube-api-access-zlzq9\") pod \"nova-cell0-db-create-29d72\" (UID: \"4c5ddc4c-65f5-4b87-b30c-6c63031f8826\") " pod="openstack/nova-cell0-db-create-29d72"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.874356 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ffj48"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.904933 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-821d-account-create-update-6bmpn"]
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.906556 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-821d-account-create-update-6bmpn"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.908409 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt26g\" (UniqueName: \"kubernetes.io/projected/26f931c2-83c8-4d1a-88ff-4483d4aba42d-kube-api-access-qt26g\") pod \"nova-api-9b97-account-create-update-5fk5w\" (UID: \"26f931c2-83c8-4d1a-88ff-4483d4aba42d\") " pod="openstack/nova-api-9b97-account-create-update-5fk5w"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.908635 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f931c2-83c8-4d1a-88ff-4483d4aba42d-operator-scripts\") pod \"nova-api-9b97-account-create-update-5fk5w\" (UID: \"26f931c2-83c8-4d1a-88ff-4483d4aba42d\") " pod="openstack/nova-api-9b97-account-create-update-5fk5w"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.908776 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k7mw\" (UniqueName: \"kubernetes.io/projected/a043d332-9921-4219-9ad6-12e0cb2e31b9-kube-api-access-2k7mw\") pod \"nova-cell1-db-create-n86x7\" (UID: \"a043d332-9921-4219-9ad6-12e0cb2e31b9\") " pod="openstack/nova-cell1-db-create-n86x7"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.908895 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a043d332-9921-4219-9ad6-12e0cb2e31b9-operator-scripts\") pod \"nova-cell1-db-create-n86x7\" (UID: \"a043d332-9921-4219-9ad6-12e0cb2e31b9\") " pod="openstack/nova-cell1-db-create-n86x7"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.909679 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a043d332-9921-4219-9ad6-12e0cb2e31b9-operator-scripts\") pod \"nova-cell1-db-create-n86x7\" (UID: \"a043d332-9921-4219-9ad6-12e0cb2e31b9\") " pod="openstack/nova-cell1-db-create-n86x7"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.911234 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f931c2-83c8-4d1a-88ff-4483d4aba42d-operator-scripts\") pod \"nova-api-9b97-account-create-update-5fk5w\" (UID: \"26f931c2-83c8-4d1a-88ff-4483d4aba42d\") " pod="openstack/nova-api-9b97-account-create-update-5fk5w"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.913221 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.951310 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k7mw\" (UniqueName: \"kubernetes.io/projected/a043d332-9921-4219-9ad6-12e0cb2e31b9-kube-api-access-2k7mw\") pod \"nova-cell1-db-create-n86x7\" (UID: \"a043d332-9921-4219-9ad6-12e0cb2e31b9\") " pod="openstack/nova-cell1-db-create-n86x7"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.951783 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt26g\" (UniqueName: \"kubernetes.io/projected/26f931c2-83c8-4d1a-88ff-4483d4aba42d-kube-api-access-qt26g\") pod \"nova-api-9b97-account-create-update-5fk5w\" (UID: \"26f931c2-83c8-4d1a-88ff-4483d4aba42d\") " pod="openstack/nova-api-9b97-account-create-update-5fk5w"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.974149 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-29d72"
Jan 27 22:12:08 crc kubenswrapper[4803]: I0127 22:12:08.978953 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-821d-account-create-update-6bmpn"]
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.020528 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7srv\" (UniqueName: \"kubernetes.io/projected/4be52911-e65b-41f4-b207-efc49bc308d9-kube-api-access-f7srv\") pod \"nova-cell0-821d-account-create-update-6bmpn\" (UID: \"4be52911-e65b-41f4-b207-efc49bc308d9\") " pod="openstack/nova-cell0-821d-account-create-update-6bmpn"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.025691 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be52911-e65b-41f4-b207-efc49bc308d9-operator-scripts\") pod \"nova-cell0-821d-account-create-update-6bmpn\" (UID: \"4be52911-e65b-41f4-b207-efc49bc308d9\") " pod="openstack/nova-cell0-821d-account-create-update-6bmpn"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.050364 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9b97-account-create-update-5fk5w"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.099485 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ffb8-account-create-update-wvcqj"]
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.100902 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.105217 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.111516 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ffb8-account-create-update-wvcqj"]
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.128059 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7srv\" (UniqueName: \"kubernetes.io/projected/4be52911-e65b-41f4-b207-efc49bc308d9-kube-api-access-f7srv\") pod \"nova-cell0-821d-account-create-update-6bmpn\" (UID: \"4be52911-e65b-41f4-b207-efc49bc308d9\") " pod="openstack/nova-cell0-821d-account-create-update-6bmpn"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.128181 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be52911-e65b-41f4-b207-efc49bc308d9-operator-scripts\") pod \"nova-cell0-821d-account-create-update-6bmpn\" (UID: \"4be52911-e65b-41f4-b207-efc49bc308d9\") " pod="openstack/nova-cell0-821d-account-create-update-6bmpn"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.128717 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n86x7"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.129021 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be52911-e65b-41f4-b207-efc49bc308d9-operator-scripts\") pod \"nova-cell0-821d-account-create-update-6bmpn\" (UID: \"4be52911-e65b-41f4-b207-efc49bc308d9\") " pod="openstack/nova-cell0-821d-account-create-update-6bmpn"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.155808 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7srv\" (UniqueName: \"kubernetes.io/projected/4be52911-e65b-41f4-b207-efc49bc308d9-kube-api-access-f7srv\") pod \"nova-cell0-821d-account-create-update-6bmpn\" (UID: \"4be52911-e65b-41f4-b207-efc49bc308d9\") " pod="openstack/nova-cell0-821d-account-create-update-6bmpn"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.234904 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-821d-account-create-update-6bmpn"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.234980 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szngf\" (UniqueName: \"kubernetes.io/projected/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-kube-api-access-szngf\") pod \"nova-cell1-ffb8-account-create-update-wvcqj\" (UID: \"7e4f1dd8-79ee-4832-9474-cabab5bc72e8\") " pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.235079 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-operator-scripts\") pod \"nova-cell1-ffb8-account-create-update-wvcqj\" (UID: \"7e4f1dd8-79ee-4832-9474-cabab5bc72e8\") " pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.329133 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.329446 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="ceilometer-central-agent" containerID="cri-o://6e1ccfd92241094c36c6597a9ca17f2e07201b2ffbb909421e10fcfd8b58d09f" gracePeriod=30
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.329580 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="proxy-httpd" containerID="cri-o://d5cc94aeb87b89702edc081e30aa41d555d46927fd7222bac060e1b766b8e01e" gracePeriod=30
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.329617 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="sg-core" containerID="cri-o://323088ee81b8caa75c358286d925cf084a7b7daeb9a79f5acb193a7351343998" gracePeriod=30
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.329649 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="ceilometer-notification-agent" containerID="cri-o://d32abfec1dd9f35d0b6cd9609c2c5f37e4690190ccce6e6a5a29363e6fdaa8eb" gracePeriod=30
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.338981 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szngf\" (UniqueName: \"kubernetes.io/projected/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-kube-api-access-szngf\") pod \"nova-cell1-ffb8-account-create-update-wvcqj\" (UID: \"7e4f1dd8-79ee-4832-9474-cabab5bc72e8\") " pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.339056 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-operator-scripts\") pod \"nova-cell1-ffb8-account-create-update-wvcqj\" (UID: \"7e4f1dd8-79ee-4832-9474-cabab5bc72e8\") " pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.340084 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-operator-scripts\") pod \"nova-cell1-ffb8-account-create-update-wvcqj\" (UID: \"7e4f1dd8-79ee-4832-9474-cabab5bc72e8\") " pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.343750 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.210:3000/\": EOF"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.360777 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szngf\" (UniqueName: \"kubernetes.io/projected/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-kube-api-access-szngf\") pod \"nova-cell1-ffb8-account-create-update-wvcqj\" (UID: \"7e4f1dd8-79ee-4832-9474-cabab5bc72e8\") " pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.442001 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj"
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.732624 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.733302 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2568e2db-68d1-49fc-a0fd-363e983d8b97" containerName="glance-log" containerID="cri-o://9a61044a78af284d7e7f3fe7776badd0c8ff2f8c2516d15226ffc4eaa2c4ec1b" gracePeriod=30
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.733443 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2568e2db-68d1-49fc-a0fd-363e983d8b97" containerName="glance-httpd" containerID="cri-o://9314ccf5202326dc651edcf0da21dcead4e773b9b60ade52d9912a9d7c50270c" gracePeriod=30
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.863311 4803 generic.go:334] "Generic (PLEG): container finished" podID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerID="d5cc94aeb87b89702edc081e30aa41d555d46927fd7222bac060e1b766b8e01e" exitCode=0
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.863349 4803 generic.go:334] "Generic (PLEG): container finished" podID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerID="323088ee81b8caa75c358286d925cf084a7b7daeb9a79f5acb193a7351343998" exitCode=2
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.863359 4803 generic.go:334] "Generic (PLEG): container finished" podID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerID="6e1ccfd92241094c36c6597a9ca17f2e07201b2ffbb909421e10fcfd8b58d09f" exitCode=0
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.863407 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e408c3-f14c-4cad-a5b5-24d601fcb8d8","Type":"ContainerDied","Data":"d5cc94aeb87b89702edc081e30aa41d555d46927fd7222bac060e1b766b8e01e"}
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.863436 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e408c3-f14c-4cad-a5b5-24d601fcb8d8","Type":"ContainerDied","Data":"323088ee81b8caa75c358286d925cf084a7b7daeb9a79f5acb193a7351343998"}
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.863448 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e408c3-f14c-4cad-a5b5-24d601fcb8d8","Type":"ContainerDied","Data":"6e1ccfd92241094c36c6597a9ca17f2e07201b2ffbb909421e10fcfd8b58d09f"}
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.868932 4803 generic.go:334] "Generic (PLEG): container finished" podID="2568e2db-68d1-49fc-a0fd-363e983d8b97" containerID="9a61044a78af284d7e7f3fe7776badd0c8ff2f8c2516d15226ffc4eaa2c4ec1b" exitCode=143
Jan 27 22:12:09 crc kubenswrapper[4803]: I0127 22:12:09.868973 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2568e2db-68d1-49fc-a0fd-363e983d8b97","Type":"ContainerDied","Data":"9a61044a78af284d7e7f3fe7776badd0c8ff2f8c2516d15226ffc4eaa2c4ec1b"}
Jan 27 22:12:10 crc kubenswrapper[4803]: I0127 22:12:10.881911 4803 generic.go:334] "Generic (PLEG): container finished" podID="8215d5aa-a30a-4a03-8058-509b5d04b261" containerID="358f275c56eb86e806cfe67db4dc7828a1452e7f59367d2056a388ab1dbad289" exitCode=0
Jan 27 22:12:10 crc kubenswrapper[4803]: I0127 22:12:10.881963 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8215d5aa-a30a-4a03-8058-509b5d04b261","Type":"ContainerDied","Data":"358f275c56eb86e806cfe67db4dc7828a1452e7f59367d2056a388ab1dbad289"}
Jan 27 22:12:12 crc kubenswrapper[4803]: I0127 22:12:12.917773 4803 generic.go:334] "Generic (PLEG): container finished" podID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerID="d32abfec1dd9f35d0b6cd9609c2c5f37e4690190ccce6e6a5a29363e6fdaa8eb" exitCode=0
Jan 27 22:12:12 crc kubenswrapper[4803]: I0127 22:12:12.918095 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e408c3-f14c-4cad-a5b5-24d601fcb8d8","Type":"ContainerDied","Data":"d32abfec1dd9f35d0b6cd9609c2c5f37e4690190ccce6e6a5a29363e6fdaa8eb"}
Jan 27 22:12:13 crc kubenswrapper[4803]: I0127 22:12:13.970191 4803 generic.go:334] "Generic (PLEG): container finished" podID="2568e2db-68d1-49fc-a0fd-363e983d8b97" containerID="9314ccf5202326dc651edcf0da21dcead4e773b9b60ade52d9912a9d7c50270c" exitCode=0
Jan 27 22:12:13 crc kubenswrapper[4803]: I0127 22:12:13.970408 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2568e2db-68d1-49fc-a0fd-363e983d8b97","Type":"ContainerDied","Data":"9314ccf5202326dc651edcf0da21dcead4e773b9b60ade52d9912a9d7c50270c"}
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.194230 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.252588 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-config-data\") pod \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.252657 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-run-httpd\") pod \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.252686 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xbwb\" (UniqueName: \"kubernetes.io/projected/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-kube-api-access-6xbwb\") pod \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.252833 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-sg-core-conf-yaml\") pod \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.252957 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-log-httpd\") pod \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.253002 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-combined-ca-bundle\") pod \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.253144 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-scripts\") pod \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\" (UID: \"17e408c3-f14c-4cad-a5b5-24d601fcb8d8\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.255392 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "17e408c3-f14c-4cad-a5b5-24d601fcb8d8" (UID: "17e408c3-f14c-4cad-a5b5-24d601fcb8d8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.255717 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "17e408c3-f14c-4cad-a5b5-24d601fcb8d8" (UID: "17e408c3-f14c-4cad-a5b5-24d601fcb8d8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.265473 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-kube-api-access-6xbwb" (OuterVolumeSpecName: "kube-api-access-6xbwb") pod "17e408c3-f14c-4cad-a5b5-24d601fcb8d8" (UID: "17e408c3-f14c-4cad-a5b5-24d601fcb8d8"). InnerVolumeSpecName "kube-api-access-6xbwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.269873 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-scripts" (OuterVolumeSpecName: "scripts") pod "17e408c3-f14c-4cad-a5b5-24d601fcb8d8" (UID: "17e408c3-f14c-4cad-a5b5-24d601fcb8d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.316685 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "17e408c3-f14c-4cad-a5b5-24d601fcb8d8" (UID: "17e408c3-f14c-4cad-a5b5-24d601fcb8d8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.357497 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.357524 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.357534 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xbwb\" (UniqueName: \"kubernetes.io/projected/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-kube-api-access-6xbwb\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.357548 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.357556 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.397165 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17e408c3-f14c-4cad-a5b5-24d601fcb8d8" (UID: "17e408c3-f14c-4cad-a5b5-24d601fcb8d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.460137 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-config-data" (OuterVolumeSpecName: "config-data") pod "17e408c3-f14c-4cad-a5b5-24d601fcb8d8" (UID: "17e408c3-f14c-4cad-a5b5-24d601fcb8d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.462365 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.462390 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e408c3-f14c-4cad-a5b5-24d601fcb8d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.579898 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.767971 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-httpd-run\") pod \"8215d5aa-a30a-4a03-8058-509b5d04b261\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.768041 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-combined-ca-bundle\") pod \"8215d5aa-a30a-4a03-8058-509b5d04b261\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.768131 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-scripts\") pod \"8215d5aa-a30a-4a03-8058-509b5d04b261\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.768589 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8215d5aa-a30a-4a03-8058-509b5d04b261" (UID: "8215d5aa-a30a-4a03-8058-509b5d04b261"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.768725 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"8215d5aa-a30a-4a03-8058-509b5d04b261\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.768756 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdndz\" (UniqueName: \"kubernetes.io/projected/8215d5aa-a30a-4a03-8058-509b5d04b261-kube-api-access-zdndz\") pod \"8215d5aa-a30a-4a03-8058-509b5d04b261\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.768901 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-logs\") pod \"8215d5aa-a30a-4a03-8058-509b5d04b261\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.768926 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-config-data\") pod \"8215d5aa-a30a-4a03-8058-509b5d04b261\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.768994 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-internal-tls-certs\") pod \"8215d5aa-a30a-4a03-8058-509b5d04b261\" (UID: \"8215d5aa-a30a-4a03-8058-509b5d04b261\") "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.769513 4803 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.772632 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-logs" (OuterVolumeSpecName: "logs") pod "8215d5aa-a30a-4a03-8058-509b5d04b261" (UID: "8215d5aa-a30a-4a03-8058-509b5d04b261"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.792950 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-scripts" (OuterVolumeSpecName: "scripts") pod "8215d5aa-a30a-4a03-8058-509b5d04b261" (UID: "8215d5aa-a30a-4a03-8058-509b5d04b261"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.800170 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8215d5aa-a30a-4a03-8058-509b5d04b261-kube-api-access-zdndz" (OuterVolumeSpecName: "kube-api-access-zdndz") pod "8215d5aa-a30a-4a03-8058-509b5d04b261" (UID: "8215d5aa-a30a-4a03-8058-509b5d04b261"). InnerVolumeSpecName "kube-api-access-zdndz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.804777 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48" (OuterVolumeSpecName: "glance") pod "8215d5aa-a30a-4a03-8058-509b5d04b261" (UID: "8215d5aa-a30a-4a03-8058-509b5d04b261"). InnerVolumeSpecName "pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.816502 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8215d5aa-a30a-4a03-8058-509b5d04b261" (UID: "8215d5aa-a30a-4a03-8058-509b5d04b261"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.850724 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8215d5aa-a30a-4a03-8058-509b5d04b261" (UID: "8215d5aa-a30a-4a03-8058-509b5d04b261"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.871611 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8215d5aa-a30a-4a03-8058-509b5d04b261-logs\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.871644 4803 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.871656 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.871665 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.871704 4803 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") on node \"crc\" "
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.871714 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdndz\" (UniqueName: \"kubernetes.io/projected/8215d5aa-a30a-4a03-8058-509b5d04b261-kube-api-access-zdndz\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.886729 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-config-data" (OuterVolumeSpecName: "config-data") pod "8215d5aa-a30a-4a03-8058-509b5d04b261" (UID: "8215d5aa-a30a-4a03-8058-509b5d04b261"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.911402 4803 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.911602 4803 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48") on node "crc"
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.974642 4803 reconciler_common.go:293] "Volume detached for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:14 crc kubenswrapper[4803]: I0127 22:12:14.975584 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8215d5aa-a30a-4a03-8058-509b5d04b261-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.006923 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8215d5aa-a30a-4a03-8058-509b5d04b261","Type":"ContainerDied","Data":"e590bb33bb02e4b77055ab045232498ccfb752f65bcdd540b855a418438a6cee"}
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.006982 4803 scope.go:117] "RemoveContainer" containerID="358f275c56eb86e806cfe67db4dc7828a1452e7f59367d2056a388ab1dbad289"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.007026 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.022617 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"70c0e109-5a8c-4c70-87a6-bc31ed1a001d","Type":"ContainerStarted","Data":"7ff7f28ce8f141b169902cf2a8576faf3ef4817881a4a1bcf60949f18be5bc75"}
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.061771 4803 scope.go:117] "RemoveContainer" containerID="ef4cf056b8f9b84ecba3e9ad2a548f23e4339dc78dcb5d24c92c6a7502b9af85"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.061893 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-n86x7"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.066303 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e408c3-f14c-4cad-a5b5-24d601fcb8d8","Type":"ContainerDied","Data":"fa450a52a32a5c6b8a3fa7591fb01464b7cc4bab478e16e2e58238c9f70b3cd8"}
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.066383 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.096043 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-29d72"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.100246 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.504364631 podStartE2EDuration="13.100225659s" podCreationTimestamp="2026-01-27 22:12:02 +0000 UTC" firstStartedPulling="2026-01-27 22:12:03.310791197 +0000 UTC m=+1475.726812896" lastFinishedPulling="2026-01-27 22:12:13.906652225 +0000 UTC m=+1486.322673924" observedRunningTime="2026-01-27 22:12:15.051456084 +0000 UTC m=+1487.467477783" watchObservedRunningTime="2026-01-27 22:12:15.100225659 +0000 UTC m=+1487.516247358"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.160958 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ffj48"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.198280 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 22:12:15 crc kubenswrapper[4803]: W0127 22:12:15.203416 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda043d332_9921_4219_9ad6_12e0cb2e31b9.slice/crio-b032fbc405fb7abb30dce1cf026885960a2a7e575ebe319301fff1e9dfa9c942 WatchSource:0}: Error finding container b032fbc405fb7abb30dce1cf026885960a2a7e575ebe319301fff1e9dfa9c942: Status 404 returned error can't find the container with id b032fbc405fb7abb30dce1cf026885960a2a7e575ebe319301fff1e9dfa9c942
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.205365 4803 scope.go:117] "RemoveContainer" containerID="d5cc94aeb87b89702edc081e30aa41d555d46927fd7222bac060e1b766b8e01e"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.256675 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9b97-account-create-update-5fk5w"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.283439 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ffb8-account-create-update-wvcqj"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.305404 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 22:12:15 crc kubenswrapper[4803]: W0127 22:12:15.307247 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26f931c2_83c8_4d1a_88ff_4483d4aba42d.slice/crio-8170bb386da80c8c850e8980caac270491288179eb1c22194e54916f6e9c0bad WatchSource:0}: Error finding container 8170bb386da80c8c850e8980caac270491288179eb1c22194e54916f6e9c0bad: Status 404 returned error can't find the container with id 8170bb386da80c8c850e8980caac270491288179eb1c22194e54916f6e9c0bad
Jan 27 22:12:15 crc kubenswrapper[4803]: W0127 22:12:15.308595 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod912aaad5_2b5b_431b_821f_0ba813a0faaf.slice/crio-74044aae4d31ac917dc1d7ea5223835094f7bb56d14a2bed1dd8f92466032ac1 WatchSource:0}: Error finding container 74044aae4d31ac917dc1d7ea5223835094f7bb56d14a2bed1dd8f92466032ac1: Status 404 returned error can't find the container with id 74044aae4d31ac917dc1d7ea5223835094f7bb56d14a2bed1dd8f92466032ac1
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.317368 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 22:12:15 crc kubenswrapper[4803]: E0127 22:12:15.317891 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="sg-core"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.317903 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="sg-core"
Jan 27 22:12:15 crc kubenswrapper[4803]: E0127 22:12:15.317926 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="proxy-httpd"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.317932 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="proxy-httpd"
Jan 27 22:12:15 crc kubenswrapper[4803]: E0127 22:12:15.317950 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8215d5aa-a30a-4a03-8058-509b5d04b261" containerName="glance-log"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.317956 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8215d5aa-a30a-4a03-8058-509b5d04b261" containerName="glance-log"
Jan 27 22:12:15 crc kubenswrapper[4803]: E0127 22:12:15.317963 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="ceilometer-central-agent"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.317972 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="ceilometer-central-agent"
Jan 27 22:12:15 crc kubenswrapper[4803]: E0127 22:12:15.317983 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="ceilometer-notification-agent"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.317989 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="ceilometer-notification-agent"
Jan 27 22:12:15 crc kubenswrapper[4803]: E0127 22:12:15.318004 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8215d5aa-a30a-4a03-8058-509b5d04b261" containerName="glance-httpd"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.318012 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8215d5aa-a30a-4a03-8058-509b5d04b261" containerName="glance-httpd"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.318241 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="ceilometer-central-agent"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.318259 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="ceilometer-notification-agent"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.318273 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8215d5aa-a30a-4a03-8058-509b5d04b261" containerName="glance-httpd"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.318288 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="proxy-httpd"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.318296 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" containerName="sg-core"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.318309 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8215d5aa-a30a-4a03-8058-509b5d04b261" containerName="glance-log"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.319485 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.321816 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.322091 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.355872 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-821d-account-create-update-6bmpn"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.377523 4803 scope.go:117] "RemoveContainer" containerID="323088ee81b8caa75c358286d925cf084a7b7daeb9a79f5acb193a7351343998"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.380768 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.403372 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-54c764888c-dpmfw"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.437774 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.448926 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.450299 4803 scope.go:117] "RemoveContainer" containerID="d32abfec1dd9f35d0b6cd9609c2c5f37e4690190ccce6e6a5a29363e6fdaa8eb"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.470483 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.474616 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.482071 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.482407 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.482286 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.483668 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.504413 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.504458 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.504522 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.504558 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.504578 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-logs\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.504652 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.504787 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.504828 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpc9s\" (UniqueName: \"kubernetes.io/projected/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-kube-api-access-xpc9s\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.561033 4803 scope.go:117] "RemoveContainer" containerID="6e1ccfd92241094c36c6597a9ca17f2e07201b2ffbb909421e10fcfd8b58d09f"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.607778 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-config-data\") pod \"2568e2db-68d1-49fc-a0fd-363e983d8b97\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") "
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.607894 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-combined-ca-bundle\") pod \"2568e2db-68d1-49fc-a0fd-363e983d8b97\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") "
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.611962 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"2568e2db-68d1-49fc-a0fd-363e983d8b97\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") "
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.612013 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-public-tls-certs\") pod \"2568e2db-68d1-49fc-a0fd-363e983d8b97\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") "
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.612864 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-scripts\") pod \"2568e2db-68d1-49fc-a0fd-363e983d8b97\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") "
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.612939 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-logs\") pod \"2568e2db-68d1-49fc-a0fd-363e983d8b97\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") "
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.612960 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-httpd-run\") pod \"2568e2db-68d1-49fc-a0fd-363e983d8b97\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") "
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613062 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8792v\" (UniqueName: \"kubernetes.io/projected/2568e2db-68d1-49fc-a0fd-363e983d8b97-kube-api-access-8792v\") pod \"2568e2db-68d1-49fc-a0fd-363e983d8b97\" (UID: \"2568e2db-68d1-49fc-a0fd-363e983d8b97\") "
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613224 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-log-httpd\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613264 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613289 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-logs\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613361 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-config-data\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613405 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613428 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613517 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsnc2\" (UniqueName: \"kubernetes.io/projected/eb8d5f51-354d-4590-a83b-489e614f0c25-kube-api-access-lsnc2\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613585 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-scripts\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613623 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613650 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpc9s\" (UniqueName: \"kubernetes.io/projected/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-kube-api-access-xpc9s\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0"
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613680 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-config-data\") 
pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613708 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613765 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-run-httpd\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613803 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.613827 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.621070 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2568e2db-68d1-49fc-a0fd-363e983d8b97" (UID: "2568e2db-68d1-49fc-a0fd-363e983d8b97"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.621338 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-logs\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.621667 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.629140 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.629295 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.629473 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.629483 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aca46170828d627b2eca91669573c71b0777b4758f25e31ae46ddbca6c8ecc63/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.629332 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.633716 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-logs" (OuterVolumeSpecName: "logs") pod "2568e2db-68d1-49fc-a0fd-363e983d8b97" (UID: "2568e2db-68d1-49fc-a0fd-363e983d8b97"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.633856 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.644597 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpc9s\" (UniqueName: \"kubernetes.io/projected/9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67-kube-api-access-xpc9s\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.654498 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2568e2db-68d1-49fc-a0fd-363e983d8b97-kube-api-access-8792v" (OuterVolumeSpecName: "kube-api-access-8792v") pod "2568e2db-68d1-49fc-a0fd-363e983d8b97" (UID: "2568e2db-68d1-49fc-a0fd-363e983d8b97"). InnerVolumeSpecName "kube-api-access-8792v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.676237 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-scripts" (OuterVolumeSpecName: "scripts") pod "2568e2db-68d1-49fc-a0fd-363e983d8b97" (UID: "2568e2db-68d1-49fc-a0fd-363e983d8b97"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.708435 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6" (OuterVolumeSpecName: "glance") pod "2568e2db-68d1-49fc-a0fd-363e983d8b97" (UID: "2568e2db-68d1-49fc-a0fd-363e983d8b97"). InnerVolumeSpecName "pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.717907 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-run-httpd\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.718311 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-run-httpd\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.718384 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.719391 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-log-httpd\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.719539 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-config-data\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.719610 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.719729 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsnc2\" (UniqueName: \"kubernetes.io/projected/eb8d5f51-354d-4590-a83b-489e614f0c25-kube-api-access-lsnc2\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.719822 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-scripts\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.720367 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-scripts\") 
on node \"crc\" DevicePath \"\"" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.720391 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-logs\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.720404 4803 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2568e2db-68d1-49fc-a0fd-363e983d8b97-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.720417 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8792v\" (UniqueName: \"kubernetes.io/projected/2568e2db-68d1-49fc-a0fd-363e983d8b97-kube-api-access-8792v\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.720494 4803 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") on node \"crc\" " Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.720689 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-log-httpd\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.726502 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.728562 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.742656 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-scripts\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.747936 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-config-data\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.748754 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsnc2\" (UniqueName: \"kubernetes.io/projected/eb8d5f51-354d-4590-a83b-489e614f0c25-kube-api-access-lsnc2\") pod \"ceilometer-0\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.783193 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2568e2db-68d1-49fc-a0fd-363e983d8b97" (UID: 
"2568e2db-68d1-49fc-a0fd-363e983d8b97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.797044 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ae0acb5-077b-4f1f-9f5c-4c9e1d759f48\") pod \"glance-default-internal-api-0\" (UID: \"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67\") " pod="openstack/glance-default-internal-api-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.822888 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.832711 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.833221 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-config-data" (OuterVolumeSpecName: "config-data") pod "2568e2db-68d1-49fc-a0fd-363e983d8b97" (UID: "2568e2db-68d1-49fc-a0fd-363e983d8b97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.873838 4803 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.873988 4803 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6") on node "crc" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.880756 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2568e2db-68d1-49fc-a0fd-363e983d8b97" (UID: "2568e2db-68d1-49fc-a0fd-363e983d8b97"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.925147 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.925416 4803 reconciler_common.go:293] "Volume detached for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.925432 4803 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2568e2db-68d1-49fc-a0fd-363e983d8b97-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:15 crc kubenswrapper[4803]: I0127 22:12:15.981544 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.084256 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-29d72" event={"ID":"4c5ddc4c-65f5-4b87-b30c-6c63031f8826","Type":"ContainerStarted","Data":"35983da9127dee21f98c498d2cce3484b9e61f2da45d93e789b7f734b4df0349"} Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.087887 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj" event={"ID":"7e4f1dd8-79ee-4832-9474-cabab5bc72e8","Type":"ContainerStarted","Data":"b8d5994cde64b2c3395cc95a6c25f09bf59dd32cb606a0a1063b53bf6c3711ca"} Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.090001 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2568e2db-68d1-49fc-a0fd-363e983d8b97","Type":"ContainerDied","Data":"3fa3b196fce23bb819351d343db44a9cd1c296a8c0ba4756f257a21774339217"} Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.090051 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.090057 4803 scope.go:117] "RemoveContainer" containerID="9314ccf5202326dc651edcf0da21dcead4e773b9b60ade52d9912a9d7c50270c" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.097183 4803 generic.go:334] "Generic (PLEG): container finished" podID="4be52911-e65b-41f4-b207-efc49bc308d9" containerID="3764bf033604f05d05e795ba89541aa1c4a3e0511424a0ed2f0011a122658ee1" exitCode=0 Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.097253 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-821d-account-create-update-6bmpn" event={"ID":"4be52911-e65b-41f4-b207-efc49bc308d9","Type":"ContainerDied","Data":"3764bf033604f05d05e795ba89541aa1c4a3e0511424a0ed2f0011a122658ee1"} Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.097280 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-821d-account-create-update-6bmpn" event={"ID":"4be52911-e65b-41f4-b207-efc49bc308d9","Type":"ContainerStarted","Data":"ce20e5ce7a2749ecaf792c1c40b67a43baab164422bfe625401080dbe67a9e5e"} Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.101243 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ffj48" event={"ID":"488bf67e-5edf-45f8-8ac9-a12e75646525","Type":"ContainerStarted","Data":"ecc1dbd89a86001c6642d51f5c0a7017500ef4b5e4ce55632155321e1f142f9e"} Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.102616 4803 generic.go:334] "Generic (PLEG): container finished" podID="a043d332-9921-4219-9ad6-12e0cb2e31b9" containerID="4cd409d15e23e7d7bf89c7bbb9726050a0cb13b99b70202d067dd35e9ca78630" exitCode=0 Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.102658 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n86x7" event={"ID":"a043d332-9921-4219-9ad6-12e0cb2e31b9","Type":"ContainerDied","Data":"4cd409d15e23e7d7bf89c7bbb9726050a0cb13b99b70202d067dd35e9ca78630"} Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.102697 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n86x7" event={"ID":"a043d332-9921-4219-9ad6-12e0cb2e31b9","Type":"ContainerStarted","Data":"b032fbc405fb7abb30dce1cf026885960a2a7e575ebe319301fff1e9dfa9c942"} Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 
22:12:16.103612 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54c764888c-dpmfw" event={"ID":"912aaad5-2b5b-431b-821f-0ba813a0faaf","Type":"ContainerStarted","Data":"74044aae4d31ac917dc1d7ea5223835094f7bb56d14a2bed1dd8f92466032ac1"} Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.105403 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9b97-account-create-update-5fk5w" event={"ID":"26f931c2-83c8-4d1a-88ff-4483d4aba42d","Type":"ContainerStarted","Data":"8170bb386da80c8c850e8980caac270491288179eb1c22194e54916f6e9c0bad"} Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.192981 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.224356 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.234739 4803 scope.go:117] "RemoveContainer" containerID="9a61044a78af284d7e7f3fe7776badd0c8ff2f8c2516d15226ffc4eaa2c4ec1b" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.261956 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:12:16 crc kubenswrapper[4803]: E0127 22:12:16.262580 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2568e2db-68d1-49fc-a0fd-363e983d8b97" containerName="glance-httpd" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.262593 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="2568e2db-68d1-49fc-a0fd-363e983d8b97" containerName="glance-httpd" Jan 27 22:12:16 crc kubenswrapper[4803]: E0127 22:12:16.263405 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2568e2db-68d1-49fc-a0fd-363e983d8b97" containerName="glance-log" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.263439 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="2568e2db-68d1-49fc-a0fd-363e983d8b97" containerName="glance-log" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.267353 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="2568e2db-68d1-49fc-a0fd-363e983d8b97" containerName="glance-httpd" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.267402 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="2568e2db-68d1-49fc-a0fd-363e983d8b97" containerName="glance-log" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.269736 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.272830 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.273058 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.328328 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e408c3-f14c-4cad-a5b5-24d601fcb8d8" path="/var/lib/kubelet/pods/17e408c3-f14c-4cad-a5b5-24d601fcb8d8/volumes" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.344827 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.344915 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.345441 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2568e2db-68d1-49fc-a0fd-363e983d8b97" path="/var/lib/kubelet/pods/2568e2db-68d1-49fc-a0fd-363e983d8b97/volumes" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.360708 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8215d5aa-a30a-4a03-8058-509b5d04b261" path="/var/lib/kubelet/pods/8215d5aa-a30a-4a03-8058-509b5d04b261/volumes" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.363301 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.413989 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.452484 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.452566 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f515d455-caa5-4c15-a824-f9dd3d46d1b7-logs\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.452588 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.452626 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f515d455-caa5-4c15-a824-f9dd3d46d1b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.452642 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.452682 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.452707 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnhwn\" (UniqueName: \"kubernetes.io/projected/f515d455-caa5-4c15-a824-f9dd3d46d1b7-kube-api-access-vnhwn\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.452774 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.555221 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.555626 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f515d455-caa5-4c15-a824-f9dd3d46d1b7-logs\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.555652 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.555692 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f515d455-caa5-4c15-a824-f9dd3d46d1b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc 
kubenswrapper[4803]: I0127 22:12:16.555719 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.555767 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.555789 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnhwn\" (UniqueName: \"kubernetes.io/projected/f515d455-caa5-4c15-a824-f9dd3d46d1b7-kube-api-access-vnhwn\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.555882 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.556059 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f515d455-caa5-4c15-a824-f9dd3d46d1b7-logs\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.557980 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f515d455-caa5-4c15-a824-f9dd3d46d1b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.561091 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.563413 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.565577 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.569178 4803 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.569229 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/82afd0a610ddd892574d89cd2a35286bd9ea734e30ae6ef371122711a69797f9/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.569303 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f515d455-caa5-4c15-a824-f9dd3d46d1b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.579983 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnhwn\" (UniqueName: \"kubernetes.io/projected/f515d455-caa5-4c15-a824-f9dd3d46d1b7-kube-api-access-vnhwn\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.625469 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fc8e8ce-c152-41ad-86e9-d0ea47b68ea6\") pod \"glance-default-external-api-0\" (UID: \"f515d455-caa5-4c15-a824-f9dd3d46d1b7\") " pod="openstack/glance-default-external-api-0" Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.778721 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 22:12:16 crc kubenswrapper[4803]: I0127 22:12:16.927140 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.134009 4803 generic.go:334] "Generic (PLEG): container finished" podID="26f931c2-83c8-4d1a-88ff-4483d4aba42d" containerID="10a25cab970b58592891ef09277c813d9a9b8ecdf4c787ce9427938b7a8ff554" exitCode=0 Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.134338 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9b97-account-create-update-5fk5w" event={"ID":"26f931c2-83c8-4d1a-88ff-4483d4aba42d","Type":"ContainerDied","Data":"10a25cab970b58592891ef09277c813d9a9b8ecdf4c787ce9427938b7a8ff554"} Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.138782 4803 generic.go:334] "Generic (PLEG): container finished" podID="488bf67e-5edf-45f8-8ac9-a12e75646525" containerID="955ce66cce97d49691696950c099fea1511cd1d7cff02e7af258673e8b90eccc" exitCode=0 Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.138888 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ffj48" event={"ID":"488bf67e-5edf-45f8-8ac9-a12e75646525","Type":"ContainerDied","Data":"955ce66cce97d49691696950c099fea1511cd1d7cff02e7af258673e8b90eccc"} Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.147387 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67","Type":"ContainerStarted","Data":"4fc1a0b265c60af04a192ac7d9b05da1ed59280a5a7a5ddf82323efe8ec7690b"} Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.147643 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7f556d549c-2bkn4" Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.149830 4803 generic.go:334] "Generic (PLEG): container finished" podID="7e4f1dd8-79ee-4832-9474-cabab5bc72e8" containerID="84c4c07e807f36fd4f9f69d4873a4376b7bb087eb66e27816dc1655b78159a91" exitCode=0 Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.149957 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj" event={"ID":"7e4f1dd8-79ee-4832-9474-cabab5bc72e8","Type":"ContainerDied","Data":"84c4c07e807f36fd4f9f69d4873a4376b7bb087eb66e27816dc1655b78159a91"} Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.155482 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54c764888c-dpmfw" event={"ID":"912aaad5-2b5b-431b-821f-0ba813a0faaf","Type":"ContainerStarted","Data":"b47f58374c227827cf9362a7ff5ad949d6547b994c54c971961848916002875e"} Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.155521 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54c764888c-dpmfw" event={"ID":"912aaad5-2b5b-431b-821f-0ba813a0faaf","Type":"ContainerStarted","Data":"30c4d8f00554cf4dabee295aaecfec2370674bbd787d0741fb0785ba9d2b2489"} Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.156385 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.156425 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.157962 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"eb8d5f51-354d-4590-a83b-489e614f0c25","Type":"ContainerStarted","Data":"a8faf57e6ea64b38ce43d8784041f478c5341966e9d25b9895d53af4cdb6bd30"} Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.159149 4803 generic.go:334] "Generic (PLEG): container finished" podID="4c5ddc4c-65f5-4b87-b30c-6c63031f8826" containerID="0f839680e2da0b2cd924d03df01469cc0586bc4689e38d9940096323678432d5" exitCode=0 Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.159236 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-29d72" event={"ID":"4c5ddc4c-65f5-4b87-b30c-6c63031f8826","Type":"ContainerDied","Data":"0f839680e2da0b2cd924d03df01469cc0586bc4689e38d9940096323678432d5"} Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.276595 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-69fc44b874-lbwd9"] Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.276815 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-69fc44b874-lbwd9" podUID="e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" containerName="neutron-api" containerID="cri-o://724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709" gracePeriod=30 Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.277258 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-69fc44b874-lbwd9" podUID="e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" containerName="neutron-httpd" containerID="cri-o://d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886" gracePeriod=30 Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.312350 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-54c764888c-dpmfw" podStartSLOduration=10.312331149 podStartE2EDuration="10.312331149s" podCreationTimestamp="2026-01-27 22:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:12:17.276572915 +0000 UTC m=+1489.692594604" watchObservedRunningTime="2026-01-27 22:12:17.312331149 +0000 UTC m=+1489.728352838" Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.592465 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jqw45" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="registry-server" probeResult="failure" output=< Jan 27 22:12:17 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:12:17 crc kubenswrapper[4803]: > Jan 27 22:12:17 crc kubenswrapper[4803]: I0127 22:12:17.915510 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.071377 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-821d-account-create-update-6bmpn" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.119141 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-n86x7" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.172893 4803 generic.go:334] "Generic (PLEG): container finished" podID="e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" containerID="d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886" exitCode=0 Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.173060 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fc44b874-lbwd9" event={"ID":"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea","Type":"ContainerDied","Data":"d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886"} Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.192616 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n86x7" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.192695 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n86x7" event={"ID":"a043d332-9921-4219-9ad6-12e0cb2e31b9","Type":"ContainerDied","Data":"b032fbc405fb7abb30dce1cf026885960a2a7e575ebe319301fff1e9dfa9c942"} Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.192734 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b032fbc405fb7abb30dce1cf026885960a2a7e575ebe319301fff1e9dfa9c942" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.198027 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k7mw\" (UniqueName: \"kubernetes.io/projected/a043d332-9921-4219-9ad6-12e0cb2e31b9-kube-api-access-2k7mw\") pod \"a043d332-9921-4219-9ad6-12e0cb2e31b9\" (UID: \"a043d332-9921-4219-9ad6-12e0cb2e31b9\") " Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.198152 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7srv\" (UniqueName: \"kubernetes.io/projected/4be52911-e65b-41f4-b207-efc49bc308d9-kube-api-access-f7srv\") pod \"4be52911-e65b-41f4-b207-efc49bc308d9\" (UID: \"4be52911-e65b-41f4-b207-efc49bc308d9\") " Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.198310 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a043d332-9921-4219-9ad6-12e0cb2e31b9-operator-scripts\") pod \"a043d332-9921-4219-9ad6-12e0cb2e31b9\" (UID: \"a043d332-9921-4219-9ad6-12e0cb2e31b9\") " Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.198351 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be52911-e65b-41f4-b207-efc49bc308d9-operator-scripts\") pod \"4be52911-e65b-41f4-b207-efc49bc308d9\" (UID: \"4be52911-e65b-41f4-b207-efc49bc308d9\") " Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.199366 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be52911-e65b-41f4-b207-efc49bc308d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4be52911-e65b-41f4-b207-efc49bc308d9" (UID: "4be52911-e65b-41f4-b207-efc49bc308d9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.199714 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a043d332-9921-4219-9ad6-12e0cb2e31b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a043d332-9921-4219-9ad6-12e0cb2e31b9" (UID: "a043d332-9921-4219-9ad6-12e0cb2e31b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.203252 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a043d332-9921-4219-9ad6-12e0cb2e31b9-kube-api-access-2k7mw" (OuterVolumeSpecName: "kube-api-access-2k7mw") pod "a043d332-9921-4219-9ad6-12e0cb2e31b9" (UID: "a043d332-9921-4219-9ad6-12e0cb2e31b9"). InnerVolumeSpecName "kube-api-access-2k7mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.206033 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be52911-e65b-41f4-b207-efc49bc308d9-kube-api-access-f7srv" (OuterVolumeSpecName: "kube-api-access-f7srv") pod "4be52911-e65b-41f4-b207-efc49bc308d9" (UID: "4be52911-e65b-41f4-b207-efc49bc308d9"). InnerVolumeSpecName "kube-api-access-f7srv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.209261 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8d5f51-354d-4590-a83b-489e614f0c25","Type":"ContainerStarted","Data":"80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc"} Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.214995 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f515d455-caa5-4c15-a824-f9dd3d46d1b7","Type":"ContainerStarted","Data":"5a66c5c62cd8f0d76f2401d069df1fe09db7a1e455a61bbe2214dcff06923597"} Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.217901 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-821d-account-create-update-6bmpn" event={"ID":"4be52911-e65b-41f4-b207-efc49bc308d9","Type":"ContainerDied","Data":"ce20e5ce7a2749ecaf792c1c40b67a43baab164422bfe625401080dbe67a9e5e"} Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.217926 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce20e5ce7a2749ecaf792c1c40b67a43baab164422bfe625401080dbe67a9e5e" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.217978 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-821d-account-create-update-6bmpn" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.229403 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67","Type":"ContainerStarted","Data":"cdbe74230917c4ed09b61464acd63b09ef1f26e2aab449560660f0058be39e9c"} Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.305027 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k7mw\" (UniqueName: \"kubernetes.io/projected/a043d332-9921-4219-9ad6-12e0cb2e31b9-kube-api-access-2k7mw\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.305060 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7srv\" (UniqueName: \"kubernetes.io/projected/4be52911-e65b-41f4-b207-efc49bc308d9-kube-api-access-f7srv\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.305073 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a043d332-9921-4219-9ad6-12e0cb2e31b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.305105 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be52911-e65b-41f4-b207-efc49bc308d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.485805 4803 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod53f3ea29-9273-4b38-8f97-0821042ab7fc"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod53f3ea29-9273-4b38-8f97-0821042ab7fc] : Timed out while waiting for systemd to remove kubepods-besteffort-pod53f3ea29_9273_4b38_8f97_0821042ab7fc.slice" Jan 27 22:12:18 crc kubenswrapper[4803]: E0127 22:12:18.630611 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda043d332_9921_4219_9ad6_12e0cb2e31b9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4be52911_e65b_41f4_b207_efc49bc308d9.slice/crio-ce20e5ce7a2749ecaf792c1c40b67a43baab164422bfe625401080dbe67a9e5e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4be52911_e65b_41f4_b207_efc49bc308d9.slice\": RecentStats: unable to find data in memory cache]" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.835288 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-29d72" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.939669 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlzq9\" (UniqueName: \"kubernetes.io/projected/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-kube-api-access-zlzq9\") pod \"4c5ddc4c-65f5-4b87-b30c-6c63031f8826\" (UID: \"4c5ddc4c-65f5-4b87-b30c-6c63031f8826\") " Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.939986 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-operator-scripts\") pod \"4c5ddc4c-65f5-4b87-b30c-6c63031f8826\" (UID: \"4c5ddc4c-65f5-4b87-b30c-6c63031f8826\") " Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.941457 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c5ddc4c-65f5-4b87-b30c-6c63031f8826" (UID: "4c5ddc4c-65f5-4b87-b30c-6c63031f8826"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:12:18 crc kubenswrapper[4803]: I0127 22:12:18.955737 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-kube-api-access-zlzq9" (OuterVolumeSpecName: "kube-api-access-zlzq9") pod "4c5ddc4c-65f5-4b87-b30c-6c63031f8826" (UID: "4c5ddc4c-65f5-4b87-b30c-6c63031f8826"). InnerVolumeSpecName "kube-api-access-zlzq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.042545 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlzq9\" (UniqueName: \"kubernetes.io/projected/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-kube-api-access-zlzq9\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.042580 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5ddc4c-65f5-4b87-b30c-6c63031f8826-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.285324 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9b97-account-create-update-5fk5w" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.292815 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8d5f51-354d-4590-a83b-489e614f0c25","Type":"ContainerStarted","Data":"79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85"} Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.308457 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-29d72" event={"ID":"4c5ddc4c-65f5-4b87-b30c-6c63031f8826","Type":"ContainerDied","Data":"35983da9127dee21f98c498d2cce3484b9e61f2da45d93e789b7f734b4df0349"} Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.308492 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35983da9127dee21f98c498d2cce3484b9e61f2da45d93e789b7f734b4df0349" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.308542 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-29d72" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.308624 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.321103 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ffj48" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.349597 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9b97-account-create-update-5fk5w" event={"ID":"26f931c2-83c8-4d1a-88ff-4483d4aba42d","Type":"ContainerDied","Data":"8170bb386da80c8c850e8980caac270491288179eb1c22194e54916f6e9c0bad"} Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.349634 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8170bb386da80c8c850e8980caac270491288179eb1c22194e54916f6e9c0bad" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.349704 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9b97-account-create-update-5fk5w" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.355971 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.382154 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ffj48" event={"ID":"488bf67e-5edf-45f8-8ac9-a12e75646525","Type":"ContainerDied","Data":"ecc1dbd89a86001c6642d51f5c0a7017500ef4b5e4ce55632155321e1f142f9e"} Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.382197 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecc1dbd89a86001c6642d51f5c0a7017500ef4b5e4ce55632155321e1f142f9e" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.382247 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ffj48" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.407346 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67","Type":"ContainerStarted","Data":"2975091bf30002126966198de58b65b17d05f6626b858ef4e667794fedc5649e"} Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.438673 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.439379 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ffb8-account-create-update-wvcqj" event={"ID":"7e4f1dd8-79ee-4832-9474-cabab5bc72e8","Type":"ContainerDied","Data":"b8d5994cde64b2c3395cc95a6c25f09bf59dd32cb606a0a1063b53bf6c3711ca"} Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.439423 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8d5994cde64b2c3395cc95a6c25f09bf59dd32cb606a0a1063b53bf6c3711ca" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.459156 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f931c2-83c8-4d1a-88ff-4483d4aba42d-operator-scripts\") pod \"26f931c2-83c8-4d1a-88ff-4483d4aba42d\" (UID: \"26f931c2-83c8-4d1a-88ff-4483d4aba42d\") " Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.459207 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488bf67e-5edf-45f8-8ac9-a12e75646525-operator-scripts\") pod \"488bf67e-5edf-45f8-8ac9-a12e75646525\" (UID: \"488bf67e-5edf-45f8-8ac9-a12e75646525\") " Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.459239 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szngf\" (UniqueName: \"kubernetes.io/projected/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-kube-api-access-szngf\") pod \"7e4f1dd8-79ee-4832-9474-cabab5bc72e8\" (UID: \"7e4f1dd8-79ee-4832-9474-cabab5bc72e8\") " Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.459355 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-operator-scripts\") pod \"7e4f1dd8-79ee-4832-9474-cabab5bc72e8\" (UID: \"7e4f1dd8-79ee-4832-9474-cabab5bc72e8\") " Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.459403 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt26g\" (UniqueName: \"kubernetes.io/projected/26f931c2-83c8-4d1a-88ff-4483d4aba42d-kube-api-access-qt26g\") pod \"26f931c2-83c8-4d1a-88ff-4483d4aba42d\" (UID: \"26f931c2-83c8-4d1a-88ff-4483d4aba42d\") " Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.459524 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bgkt\" (UniqueName: \"kubernetes.io/projected/488bf67e-5edf-45f8-8ac9-a12e75646525-kube-api-access-6bgkt\") pod \"488bf67e-5edf-45f8-8ac9-a12e75646525\" (UID: \"488bf67e-5edf-45f8-8ac9-a12e75646525\") " Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.463512 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e4f1dd8-79ee-4832-9474-cabab5bc72e8" (UID: "7e4f1dd8-79ee-4832-9474-cabab5bc72e8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.463896 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26f931c2-83c8-4d1a-88ff-4483d4aba42d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26f931c2-83c8-4d1a-88ff-4483d4aba42d" (UID: "26f931c2-83c8-4d1a-88ff-4483d4aba42d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.466293 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/488bf67e-5edf-45f8-8ac9-a12e75646525-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "488bf67e-5edf-45f8-8ac9-a12e75646525" (UID: "488bf67e-5edf-45f8-8ac9-a12e75646525"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.466736 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f931c2-83c8-4d1a-88ff-4483d4aba42d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.466783 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.475535 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/488bf67e-5edf-45f8-8ac9-a12e75646525-kube-api-access-6bgkt" (OuterVolumeSpecName: "kube-api-access-6bgkt") pod "488bf67e-5edf-45f8-8ac9-a12e75646525" (UID: "488bf67e-5edf-45f8-8ac9-a12e75646525"). InnerVolumeSpecName "kube-api-access-6bgkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.475815 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-kube-api-access-szngf" (OuterVolumeSpecName: "kube-api-access-szngf") pod "7e4f1dd8-79ee-4832-9474-cabab5bc72e8" (UID: "7e4f1dd8-79ee-4832-9474-cabab5bc72e8"). InnerVolumeSpecName "kube-api-access-szngf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.484143 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f931c2-83c8-4d1a-88ff-4483d4aba42d-kube-api-access-qt26g" (OuterVolumeSpecName: "kube-api-access-qt26g") pod "26f931c2-83c8-4d1a-88ff-4483d4aba42d" (UID: "26f931c2-83c8-4d1a-88ff-4483d4aba42d"). InnerVolumeSpecName "kube-api-access-qt26g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.590132 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bgkt\" (UniqueName: \"kubernetes.io/projected/488bf67e-5edf-45f8-8ac9-a12e75646525-kube-api-access-6bgkt\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.590167 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488bf67e-5edf-45f8-8ac9-a12e75646525-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.590177 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szngf\" (UniqueName: \"kubernetes.io/projected/7e4f1dd8-79ee-4832-9474-cabab5bc72e8-kube-api-access-szngf\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.590186 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt26g\" (UniqueName: \"kubernetes.io/projected/26f931c2-83c8-4d1a-88ff-4483d4aba42d-kube-api-access-qt26g\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:19 crc kubenswrapper[4803]: I0127 22:12:19.876587 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.8765696290000005 podStartE2EDuration="4.876569629s" podCreationTimestamp="2026-01-27 22:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:12:19.458417041 +0000 UTC m=+1491.874457691" watchObservedRunningTime="2026-01-27 22:12:19.876569629 +0000 UTC m=+1492.292591328" Jan 27 22:12:20 crc kubenswrapper[4803]: I0127 22:12:20.474126 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8d5f51-354d-4590-a83b-489e614f0c25","Type":"ContainerStarted","Data":"e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164"} Jan 27 22:12:20 crc kubenswrapper[4803]: I0127 22:12:20.480659 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f515d455-caa5-4c15-a824-f9dd3d46d1b7","Type":"ContainerStarted","Data":"65a439cc68de5348955c6e81fc3c6bbe209720f75095323f046f9757927f6c6e"} Jan 27 22:12:20 crc kubenswrapper[4803]: I0127 22:12:20.480696 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f515d455-caa5-4c15-a824-f9dd3d46d1b7","Type":"ContainerStarted","Data":"30c0a5bb6ae65afb050b64296e366a3f0875e36cde66606112c702889491df69"} Jan 27 22:12:20 crc kubenswrapper[4803]: I0127 22:12:20.504331 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.504305005 podStartE2EDuration="4.504305005s" podCreationTimestamp="2026-01-27 22:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:12:20.499831924 +0000 UTC m=+1492.915853633" watchObservedRunningTime="2026-01-27 22:12:20.504305005 +0000 UTC m=+1492.920326704" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.245075 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.334806 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-ovndb-tls-certs\") pod \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.334952 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-config\") pod \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.335029 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-combined-ca-bundle\") pod \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.335067 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-httpd-config\") pod \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.335212 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvzzz\" (UniqueName: \"kubernetes.io/projected/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-kube-api-access-zvzzz\") pod \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\" (UID: \"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea\") " Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.350995 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" (UID: "e1dfb047-0985-4a6f-955d-e5c4a4dff5ea"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.351114 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-kube-api-access-zvzzz" (OuterVolumeSpecName: "kube-api-access-zvzzz") pod "e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" (UID: "e1dfb047-0985-4a6f-955d-e5c4a4dff5ea"). InnerVolumeSpecName "kube-api-access-zvzzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.420045 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" (UID: "e1dfb047-0985-4a6f-955d-e5c4a4dff5ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.433991 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-config" (OuterVolumeSpecName: "config") pod "e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" (UID: "e1dfb047-0985-4a6f-955d-e5c4a4dff5ea"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.438710 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.438741 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.438754 4803 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.438763 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvzzz\" (UniqueName: \"kubernetes.io/projected/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-kube-api-access-zvzzz\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.444354 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" (UID: "e1dfb047-0985-4a6f-955d-e5c4a4dff5ea"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.493675 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8d5f51-354d-4590-a83b-489e614f0c25","Type":"ContainerStarted","Data":"b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1"} Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.493813 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="proxy-httpd" containerID="cri-o://b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1" gracePeriod=30 Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.493829 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.493793 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="ceilometer-central-agent" containerID="cri-o://80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc" gracePeriod=30 Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.493877 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="ceilometer-notification-agent" containerID="cri-o://79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85" gracePeriod=30 Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.493838 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="sg-core" containerID="cri-o://e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164" gracePeriod=30 Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.499545 4803 generic.go:334] "Generic (PLEG): container finished" 
podID="e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" containerID="724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709" exitCode=0 Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.499681 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-69fc44b874-lbwd9" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.500583 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fc44b874-lbwd9" event={"ID":"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea","Type":"ContainerDied","Data":"724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709"} Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.500616 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fc44b874-lbwd9" event={"ID":"e1dfb047-0985-4a6f-955d-e5c4a4dff5ea","Type":"ContainerDied","Data":"ca56329e39f9bf7ec809956ffea6158e0a250e7879bd232eb1ccfe092ad252e1"} Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.500634 4803 scope.go:117] "RemoveContainer" containerID="d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.534699 4803 scope.go:117] "RemoveContainer" containerID="724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.535687 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.119649712 podStartE2EDuration="6.535661268s" podCreationTimestamp="2026-01-27 22:12:15 +0000 UTC" firstStartedPulling="2026-01-27 22:12:16.38833606 +0000 UTC m=+1488.804357759" lastFinishedPulling="2026-01-27 22:12:20.804347616 +0000 UTC m=+1493.220369315" observedRunningTime="2026-01-27 22:12:21.515704032 +0000 UTC m=+1493.931725751" watchObservedRunningTime="2026-01-27 22:12:21.535661268 +0000 UTC m=+1493.951682967" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.541626 4803 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.551956 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-69fc44b874-lbwd9"] Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.562115 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-69fc44b874-lbwd9"] Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.564700 4803 scope.go:117] "RemoveContainer" containerID="d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886" Jan 27 22:12:21 crc kubenswrapper[4803]: E0127 22:12:21.565234 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886\": container with ID starting with d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886 not found: ID does not exist" containerID="d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.565291 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886"} err="failed to get container status \"d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886\": rpc error: code = NotFound desc = could not find container 
\"d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886\": container with ID starting with d27c6aa71b9e4a805a84a242ecbd2168040dc48a2bd9ed0484d734edb505b886 not found: ID does not exist" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.565326 4803 scope.go:117] "RemoveContainer" containerID="724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709" Jan 27 22:12:21 crc kubenswrapper[4803]: E0127 22:12:21.565725 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709\": container with ID starting with 724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709 not found: ID does not exist" containerID="724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709" Jan 27 22:12:21 crc kubenswrapper[4803]: I0127 22:12:21.565755 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709"} err="failed to get container status \"724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709\": rpc error: code = NotFound desc = could not find container \"724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709\": container with ID starting with 724792a25ce8f367901b7d90b5e7a221e13d3724bfec3122cb910ca8fc4b1709 not found: ID does not exist" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.320040 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" path="/var/lib/kubelet/pods/e1dfb047-0985-4a6f-955d-e5c4a4dff5ea/volumes" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.514013 4803 generic.go:334] "Generic (PLEG): container finished" podID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerID="b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1" exitCode=0 Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.514253 4803 generic.go:334] "Generic (PLEG): container finished" podID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerID="e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164" exitCode=2 Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.514262 4803 generic.go:334] "Generic (PLEG): container finished" podID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerID="79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85" exitCode=0 Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.514280 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8d5f51-354d-4590-a83b-489e614f0c25","Type":"ContainerDied","Data":"b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1"} Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.514302 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8d5f51-354d-4590-a83b-489e614f0c25","Type":"ContainerDied","Data":"e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164"} Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.514313 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8d5f51-354d-4590-a83b-489e614f0c25","Type":"ContainerDied","Data":"79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85"} Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.554087 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6cb97b886d-8vwwj"] Jan 27 22:12:22 crc kubenswrapper[4803]: E0127 
22:12:22.554696 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a043d332-9921-4219-9ad6-12e0cb2e31b9" containerName="mariadb-database-create" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.554717 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a043d332-9921-4219-9ad6-12e0cb2e31b9" containerName="mariadb-database-create" Jan 27 22:12:22 crc kubenswrapper[4803]: E0127 22:12:22.554727 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5ddc4c-65f5-4b87-b30c-6c63031f8826" containerName="mariadb-database-create" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.554736 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5ddc4c-65f5-4b87-b30c-6c63031f8826" containerName="mariadb-database-create" Jan 27 22:12:22 crc kubenswrapper[4803]: E0127 22:12:22.554755 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be52911-e65b-41f4-b207-efc49bc308d9" containerName="mariadb-account-create-update" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.554765 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be52911-e65b-41f4-b207-efc49bc308d9" containerName="mariadb-account-create-update" Jan 27 22:12:22 crc kubenswrapper[4803]: E0127 22:12:22.554781 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" containerName="neutron-api" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.554789 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" containerName="neutron-api" Jan 27 22:12:22 crc kubenswrapper[4803]: E0127 22:12:22.554823 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488bf67e-5edf-45f8-8ac9-a12e75646525" containerName="mariadb-database-create" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.554832 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="488bf67e-5edf-45f8-8ac9-a12e75646525" containerName="mariadb-database-create" Jan 27 22:12:22 crc kubenswrapper[4803]: E0127 22:12:22.562206 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4f1dd8-79ee-4832-9474-cabab5bc72e8" containerName="mariadb-account-create-update" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.562239 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4f1dd8-79ee-4832-9474-cabab5bc72e8" containerName="mariadb-account-create-update" Jan 27 22:12:22 crc kubenswrapper[4803]: E0127 22:12:22.562264 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" containerName="neutron-httpd" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.562271 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" containerName="neutron-httpd" Jan 27 22:12:22 crc kubenswrapper[4803]: E0127 22:12:22.562282 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f931c2-83c8-4d1a-88ff-4483d4aba42d" containerName="mariadb-account-create-update" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.562289 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f931c2-83c8-4d1a-88ff-4483d4aba42d" containerName="mariadb-account-create-update" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.562706 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" containerName="neutron-httpd" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.562732 4803 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="488bf67e-5edf-45f8-8ac9-a12e75646525" containerName="mariadb-database-create" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.562742 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f931c2-83c8-4d1a-88ff-4483d4aba42d" containerName="mariadb-account-create-update" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.562756 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5ddc4c-65f5-4b87-b30c-6c63031f8826" containerName="mariadb-database-create" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.562767 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be52911-e65b-41f4-b207-efc49bc308d9" containerName="mariadb-account-create-update" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.562776 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a043d332-9921-4219-9ad6-12e0cb2e31b9" containerName="mariadb-database-create" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.562783 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4f1dd8-79ee-4832-9474-cabab5bc72e8" containerName="mariadb-account-create-update" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.562799 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1dfb047-0985-4a6f-955d-e5c4a4dff5ea" containerName="neutron-api" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.563561 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.568384 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.568542 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.568645 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-n55fx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.579445 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6cb97b886d-8vwwj"] Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.668011 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-combined-ca-bundle\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.668098 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data-custom\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.668124 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qn7\" (UniqueName: \"kubernetes.io/projected/6c26f757-3e53-46c6-be8c-4a052b5f86e2-kube-api-access-28qn7\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.668172 4803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.674690 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6977659f7b-ttxqx"] Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.676094 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.684198 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.701897 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-z6ndt"] Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.704036 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.728004 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6977659f7b-ttxqx"] Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.758030 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-z6ndt"] Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.773519 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-combined-ca-bundle\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.773758 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.773926 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-combined-ca-bundle\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.774078 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data-custom\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.774164 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28qn7\" (UniqueName: \"kubernetes.io/projected/6c26f757-3e53-46c6-be8c-4a052b5f86e2-kube-api-access-28qn7\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc 
kubenswrapper[4803]: I0127 22:12:22.774276 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfmxj\" (UniqueName: \"kubernetes.io/projected/694f20c4-bc76-42b5-b458-4e56227ca03d-kube-api-access-sfmxj\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.774353 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.774466 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data-custom\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.783922 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.785230 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data-custom\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.788994 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-combined-ca-bundle\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.795478 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28qn7\" (UniqueName: \"kubernetes.io/projected/6c26f757-3e53-46c6-be8c-4a052b5f86e2-kube-api-access-28qn7\") pod \"heat-engine-6cb97b886d-8vwwj\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.877907 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-combined-ca-bundle\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.877982 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 
22:12:22.878048 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf2bx\" (UniqueName: \"kubernetes.io/projected/c8eef822-1016-48a2-8073-99d10757edf5-kube-api-access-rf2bx\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.878106 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.878179 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.878207 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.878231 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.878258 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfmxj\" (UniqueName: \"kubernetes.io/projected/694f20c4-bc76-42b5-b458-4e56227ca03d-kube-api-access-sfmxj\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.878305 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-config\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.878331 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data-custom\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.883726 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " 
pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.884613 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-combined-ca-bundle\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.890377 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.894553 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data-custom\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.903034 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfmxj\" (UniqueName: \"kubernetes.io/projected/694f20c4-bc76-42b5-b458-4e56227ca03d-kube-api-access-sfmxj\") pod \"heat-cfnapi-6977659f7b-ttxqx\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.910901 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7db646bcb9-fl7xv"] Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.925991 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.942524 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.970920 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7db646bcb9-fl7xv"] Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.981574 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf2bx\" (UniqueName: \"kubernetes.io/projected/c8eef822-1016-48a2-8073-99d10757edf5-kube-api-access-rf2bx\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.981683 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.981765 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.981789 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: 
\"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.981809 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.981878 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-config\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.982933 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-config\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.983642 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.993066 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.993828 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:22 crc kubenswrapper[4803]: I0127 22:12:22.994294 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.003815 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.013233 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf2bx\" (UniqueName: \"kubernetes.io/projected/c8eef822-1016-48a2-8073-99d10757edf5-kube-api-access-rf2bx\") pod \"dnsmasq-dns-688b9f5b49-z6ndt\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.029912 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.084070 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-combined-ca-bundle\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.084254 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjzj4\" (UniqueName: \"kubernetes.io/projected/197e06e5-d60b-421f-8708-a8c5b87e4bb3-kube-api-access-vjzj4\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.084289 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.084325 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data-custom\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.186026 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjzj4\" (UniqueName: \"kubernetes.io/projected/197e06e5-d60b-421f-8708-a8c5b87e4bb3-kube-api-access-vjzj4\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.186714 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.187905 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data-custom\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.187967 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-combined-ca-bundle\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.194528 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data-custom\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: 
\"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.195233 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-combined-ca-bundle\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.201225 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.203802 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjzj4\" (UniqueName: \"kubernetes.io/projected/197e06e5-d60b-421f-8708-a8c5b87e4bb3-kube-api-access-vjzj4\") pod \"heat-api-7db646bcb9-fl7xv\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.257099 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.276831 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-54c764888c-dpmfw" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.362698 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.568411 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6977659f7b-ttxqx"] Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.595794 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6cb97b886d-8vwwj"] Jan 27 22:12:23 crc kubenswrapper[4803]: I0127 22:12:23.808307 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-z6ndt"] Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.016726 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7db646bcb9-fl7xv"] Jan 27 22:12:24 crc kubenswrapper[4803]: W0127 22:12:24.064976 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod197e06e5_d60b_421f_8708_a8c5b87e4bb3.slice/crio-cfbfc884cb63baeeee4f1d3bac2cf497ddda62c3a52e77539270e952d91f50b7 WatchSource:0}: Error finding container cfbfc884cb63baeeee4f1d3bac2cf497ddda62c3a52e77539270e952d91f50b7: Status 404 returned error can't find the container with id cfbfc884cb63baeeee4f1d3bac2cf497ddda62c3a52e77539270e952d91f50b7 Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.348901 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-klms9"] Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.350704 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.354239 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.354333 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.355911 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pmtb4" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.369119 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-klms9"] Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.442673 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-config-data\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.443003 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.443083 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-scripts\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.443121 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wnlq\" (UniqueName: \"kubernetes.io/projected/a00ff690-b44a-4a6e-9bf3-560344feda39-kube-api-access-8wnlq\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.470542 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.548490 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-log-httpd\") pod \"eb8d5f51-354d-4590-a83b-489e614f0c25\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.548823 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-run-httpd\") pod \"eb8d5f51-354d-4590-a83b-489e614f0c25\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.549025 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsnc2\" (UniqueName: \"kubernetes.io/projected/eb8d5f51-354d-4590-a83b-489e614f0c25-kube-api-access-lsnc2\") pod \"eb8d5f51-354d-4590-a83b-489e614f0c25\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.549138 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-combined-ca-bundle\") pod \"eb8d5f51-354d-4590-a83b-489e614f0c25\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.549214 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-config-data\") pod \"eb8d5f51-354d-4590-a83b-489e614f0c25\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.549315 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-scripts\") pod \"eb8d5f51-354d-4590-a83b-489e614f0c25\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.549426 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-sg-core-conf-yaml\") pod \"eb8d5f51-354d-4590-a83b-489e614f0c25\" (UID: \"eb8d5f51-354d-4590-a83b-489e614f0c25\") " Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.549827 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.549994 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-scripts\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.550114 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wnlq\" (UniqueName: 
\"kubernetes.io/projected/a00ff690-b44a-4a6e-9bf3-560344feda39-kube-api-access-8wnlq\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.550355 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-config-data\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.549064 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eb8d5f51-354d-4590-a83b-489e614f0c25" (UID: "eb8d5f51-354d-4590-a83b-489e614f0c25"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.549195 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eb8d5f51-354d-4590-a83b-489e614f0c25" (UID: "eb8d5f51-354d-4590-a83b-489e614f0c25"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.557559 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8d5f51-354d-4590-a83b-489e614f0c25-kube-api-access-lsnc2" (OuterVolumeSpecName: "kube-api-access-lsnc2") pod "eb8d5f51-354d-4590-a83b-489e614f0c25" (UID: "eb8d5f51-354d-4590-a83b-489e614f0c25"). InnerVolumeSpecName "kube-api-access-lsnc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.562201 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-config-data\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.563099 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-scripts" (OuterVolumeSpecName: "scripts") pod "eb8d5f51-354d-4590-a83b-489e614f0c25" (UID: "eb8d5f51-354d-4590-a83b-489e614f0c25"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.563456 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.575206 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-scripts\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.578698 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wnlq\" (UniqueName: \"kubernetes.io/projected/a00ff690-b44a-4a6e-9bf3-560344feda39-kube-api-access-8wnlq\") pod \"nova-cell0-conductor-db-sync-klms9\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.583636 4803 generic.go:334] "Generic (PLEG): container finished" podID="c8eef822-1016-48a2-8073-99d10757edf5" containerID="a4cd8789282c3e67012cc35a05248a406727994e81356e6d30a12b19c08d74e8" exitCode=0 Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.583720 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" event={"ID":"c8eef822-1016-48a2-8073-99d10757edf5","Type":"ContainerDied","Data":"a4cd8789282c3e67012cc35a05248a406727994e81356e6d30a12b19c08d74e8"} Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.583747 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" event={"ID":"c8eef822-1016-48a2-8073-99d10757edf5","Type":"ContainerStarted","Data":"dab0327053d901e99214daf85cf81cc6c14ae5f05c66f7e9b90d442850cfa419"} Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.590651 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cb97b886d-8vwwj" event={"ID":"6c26f757-3e53-46c6-be8c-4a052b5f86e2","Type":"ContainerStarted","Data":"640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d"} Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.590722 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cb97b886d-8vwwj" event={"ID":"6c26f757-3e53-46c6-be8c-4a052b5f86e2","Type":"ContainerStarted","Data":"94e3cd78689e6653b326d16589b5bb552d16215707a57b71a10bba8af0e1f7d8"} Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.591085 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.600516 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7db646bcb9-fl7xv" event={"ID":"197e06e5-d60b-421f-8708-a8c5b87e4bb3","Type":"ContainerStarted","Data":"cfbfc884cb63baeeee4f1d3bac2cf497ddda62c3a52e77539270e952d91f50b7"} Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.622414 4803 generic.go:334] "Generic (PLEG): container finished" podID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerID="80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc" exitCode=0 Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.622536 
4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8d5f51-354d-4590-a83b-489e614f0c25","Type":"ContainerDied","Data":"80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc"} Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.622584 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8d5f51-354d-4590-a83b-489e614f0c25","Type":"ContainerDied","Data":"a8faf57e6ea64b38ce43d8784041f478c5341966e9d25b9895d53af4cdb6bd30"} Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.622605 4803 scope.go:117] "RemoveContainer" containerID="b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.622827 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.648806 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" event={"ID":"694f20c4-bc76-42b5-b458-4e56227ca03d","Type":"ContainerStarted","Data":"a7dbba77ab26a85397b657c4ddcf0e882ffa8ac6ead7936541ab76bf055bab12"} Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.653175 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eb8d5f51-354d-4590-a83b-489e614f0c25" (UID: "eb8d5f51-354d-4590-a83b-489e614f0c25"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.653909 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.654115 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.654264 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.654357 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8d5f51-354d-4590-a83b-489e614f0c25-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.654436 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsnc2\" (UniqueName: \"kubernetes.io/projected/eb8d5f51-354d-4590-a83b-489e614f0c25-kube-api-access-lsnc2\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.677915 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6cb97b886d-8vwwj" podStartSLOduration=2.677890075 podStartE2EDuration="2.677890075s" podCreationTimestamp="2026-01-27 22:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:12:24.646332286 +0000 UTC m=+1497.062353995" watchObservedRunningTime="2026-01-27 22:12:24.677890075 +0000 UTC m=+1497.093911774" Jan 27 22:12:24 crc 
kubenswrapper[4803]: I0127 22:12:24.690754 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.690964 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb8d5f51-354d-4590-a83b-489e614f0c25" (UID: "eb8d5f51-354d-4590-a83b-489e614f0c25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.694077 4803 scope.go:117] "RemoveContainer" containerID="e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.759653 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.772459 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-config-data" (OuterVolumeSpecName: "config-data") pod "eb8d5f51-354d-4590-a83b-489e614f0c25" (UID: "eb8d5f51-354d-4590-a83b-489e614f0c25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.809210 4803 scope.go:117] "RemoveContainer" containerID="79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85" Jan 27 22:12:24 crc kubenswrapper[4803]: I0127 22:12:24.862773 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8d5f51-354d-4590-a83b-489e614f0c25-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.034505 4803 scope.go:117] "RemoveContainer" containerID="80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.125966 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.147980 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.168769 4803 scope.go:117] "RemoveContainer" containerID="b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1" Jan 27 22:12:25 crc kubenswrapper[4803]: E0127 22:12:25.169418 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1\": container with ID starting with b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1 not found: ID does not exist" containerID="b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.169471 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1"} err="failed to get container status \"b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1\": rpc error: code = NotFound desc = could not find container \"b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1\": container with ID starting with 
b5a98be7d02ce54450ab96cf9d007e6a4c64743dc7809f80ec35887817801ad1 not found: ID does not exist" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.169498 4803 scope.go:117] "RemoveContainer" containerID="e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164" Jan 27 22:12:25 crc kubenswrapper[4803]: E0127 22:12:25.171929 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164\": container with ID starting with e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164 not found: ID does not exist" containerID="e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.172062 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164"} err="failed to get container status \"e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164\": rpc error: code = NotFound desc = could not find container \"e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164\": container with ID starting with e1e1c14485509d415506affb81d9fdefd4ca7f6beaa7b2f66ee191c205e65164 not found: ID does not exist" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.172093 4803 scope.go:117] "RemoveContainer" containerID="79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85" Jan 27 22:12:25 crc kubenswrapper[4803]: E0127 22:12:25.172611 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85\": container with ID starting with 79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85 not found: ID does not exist" containerID="79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.172629 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85"} err="failed to get container status \"79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85\": rpc error: code = NotFound desc = could not find container \"79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85\": container with ID starting with 79745ed7e1956f53ad01e6a5504eb30497866438b117e829dbba55898648be85 not found: ID does not exist" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.172644 4803 scope.go:117] "RemoveContainer" containerID="80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc" Jan 27 22:12:25 crc kubenswrapper[4803]: E0127 22:12:25.173316 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc\": container with ID starting with 80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc not found: ID does not exist" containerID="80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.173370 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc"} err="failed to get container status \"80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc\": rpc 
error: code = NotFound desc = could not find container \"80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc\": container with ID starting with 80bf44acbf88ca0d5c09ada33c356d0d5a38b3e14b9f779ccc1b7ec347a367fc not found: ID does not exist" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.179622 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:25 crc kubenswrapper[4803]: E0127 22:12:25.181687 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="sg-core" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.181724 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="sg-core" Jan 27 22:12:25 crc kubenswrapper[4803]: E0127 22:12:25.181786 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="ceilometer-notification-agent" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.181797 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="ceilometer-notification-agent" Jan 27 22:12:25 crc kubenswrapper[4803]: E0127 22:12:25.181830 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="ceilometer-central-agent" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.181857 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="ceilometer-central-agent" Jan 27 22:12:25 crc kubenswrapper[4803]: E0127 22:12:25.181896 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="proxy-httpd" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.181905 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="proxy-httpd" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.182537 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="proxy-httpd" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.182567 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="sg-core" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.182595 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="ceilometer-notification-agent" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.182630 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" containerName="ceilometer-central-agent" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.205684 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.208228 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.228904 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.228939 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.393461 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.393590 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c59r\" (UniqueName: \"kubernetes.io/projected/df9892cf-8ada-42c4-a4bf-b9c9416515d9-kube-api-access-2c59r\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.393612 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-run-httpd\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.393652 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.393700 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-log-httpd\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.393755 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-scripts\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.393838 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-config-data\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.405867 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-klms9"] Jan 27 22:12:25 crc kubenswrapper[4803]: W0127 22:12:25.409202 4803 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda00ff690_b44a_4a6e_9bf3_560344feda39.slice/crio-7b67a17a19dbaeb7634d2d13ececdb992ab4fa60fa495282f6a95cf9cb041c9a WatchSource:0}: Error finding container 7b67a17a19dbaeb7634d2d13ececdb992ab4fa60fa495282f6a95cf9cb041c9a: Status 404 returned error can't find the container with id 7b67a17a19dbaeb7634d2d13ececdb992ab4fa60fa495282f6a95cf9cb041c9a Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.496026 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-scripts\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.496126 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-config-data\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.496206 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.496259 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c59r\" (UniqueName: \"kubernetes.io/projected/df9892cf-8ada-42c4-a4bf-b9c9416515d9-kube-api-access-2c59r\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.496281 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-run-httpd\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.496300 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.496348 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-log-httpd\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.496790 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-log-httpd\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.497697 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-run-httpd\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " 
pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.506862 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-config-data\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.508994 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.510708 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-scripts\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.510708 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.520175 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c59r\" (UniqueName: \"kubernetes.io/projected/df9892cf-8ada-42c4-a4bf-b9c9416515d9-kube-api-access-2c59r\") pod \"ceilometer-0\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.598922 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.677823 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-klms9" event={"ID":"a00ff690-b44a-4a6e-9bf3-560344feda39","Type":"ContainerStarted","Data":"7b67a17a19dbaeb7634d2d13ececdb992ab4fa60fa495282f6a95cf9cb041c9a"} Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.683378 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" event={"ID":"c8eef822-1016-48a2-8073-99d10757edf5","Type":"ContainerStarted","Data":"dc6f07943553d51f747eb4007c810e41784703f83f0fd88387073fd56463eb6b"} Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.684154 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.717926 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" podStartSLOduration=3.717905681 podStartE2EDuration="3.717905681s" podCreationTimestamp="2026-01-27 22:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:12:25.70787085 +0000 UTC m=+1498.123892589" watchObservedRunningTime="2026-01-27 22:12:25.717905681 +0000 UTC m=+1498.133927380" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.982386 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 22:12:25 crc kubenswrapper[4803]: I0127 22:12:25.982431 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 22:12:26 crc kubenswrapper[4803]: I0127 22:12:26.022217 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 22:12:26 crc kubenswrapper[4803]: I0127 22:12:26.031308 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 22:12:26 crc kubenswrapper[4803]: I0127 22:12:26.330131 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8d5f51-354d-4590-a83b-489e614f0c25" path="/var/lib/kubelet/pods/eb8d5f51-354d-4590-a83b-489e614f0c25/volumes" Jan 27 22:12:26 crc kubenswrapper[4803]: I0127 22:12:26.699708 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 22:12:26 crc kubenswrapper[4803]: I0127 22:12:26.699740 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 22:12:26 crc kubenswrapper[4803]: I0127 22:12:26.931628 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 22:12:26 crc kubenswrapper[4803]: I0127 22:12:26.932169 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.167468 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.202312 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 22:12:27 crc kubenswrapper[4803]: 
I0127 22:12:27.260612 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.592551 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jqw45" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="registry-server" probeResult="failure" output=< Jan 27 22:12:27 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:12:27 crc kubenswrapper[4803]: > Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.718385 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df9892cf-8ada-42c4-a4bf-b9c9416515d9","Type":"ContainerStarted","Data":"fe27424531d5544e91c23d73597fa87c2b471616e815a92a312f10bdc5012057"} Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.721591 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7db646bcb9-fl7xv" event={"ID":"197e06e5-d60b-421f-8708-a8c5b87e4bb3","Type":"ContainerStarted","Data":"965d72d7e4d0d24ea8c1bef43bae83c921274120e511eb0a3edddc7f9d0259f7"} Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.723053 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.727285 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" event={"ID":"694f20c4-bc76-42b5-b458-4e56227ca03d","Type":"ContainerStarted","Data":"2d2ebc85fca8e2f904feda71e595bf4bd7df830ac7424287db94e9de50b8ceb0"} Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.727634 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.727765 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.727926 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.740434 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7db646bcb9-fl7xv" podStartSLOduration=3.183234378 podStartE2EDuration="5.740417616s" podCreationTimestamp="2026-01-27 22:12:22 +0000 UTC" firstStartedPulling="2026-01-27 22:12:24.092076946 +0000 UTC m=+1496.508098645" lastFinishedPulling="2026-01-27 22:12:26.649260184 +0000 UTC m=+1499.065281883" observedRunningTime="2026-01-27 22:12:27.735021451 +0000 UTC m=+1500.151043150" watchObservedRunningTime="2026-01-27 22:12:27.740417616 +0000 UTC m=+1500.156439315" Jan 27 22:12:27 crc kubenswrapper[4803]: I0127 22:12:27.789910 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" podStartSLOduration=2.728061625 podStartE2EDuration="5.789865407s" podCreationTimestamp="2026-01-27 22:12:22 +0000 UTC" firstStartedPulling="2026-01-27 22:12:23.581879882 +0000 UTC m=+1495.997901581" lastFinishedPulling="2026-01-27 22:12:26.643683654 +0000 UTC m=+1499.059705363" observedRunningTime="2026-01-27 22:12:27.766914138 +0000 UTC m=+1500.182935847" watchObservedRunningTime="2026-01-27 22:12:27.789865407 +0000 UTC m=+1500.205887106" Jan 27 22:12:28 crc kubenswrapper[4803]: I0127 22:12:28.759021 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"df9892cf-8ada-42c4-a4bf-b9c9416515d9","Type":"ContainerStarted","Data":"7eeae033d86cdd944b894bd630460001698191db360ccca4f00d8f3ce90ec9a2"} Jan 27 22:12:29 crc kubenswrapper[4803]: I0127 22:12:29.789325 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df9892cf-8ada-42c4-a4bf-b9c9416515d9","Type":"ContainerStarted","Data":"f0bf9f7050fabf76dd2daa11c5ae5f9a042d72b9923c4aefb750da28c8cd91bc"} Jan 27 22:12:29 crc kubenswrapper[4803]: I0127 22:12:29.789888 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 22:12:29 crc kubenswrapper[4803]: I0127 22:12:29.789899 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 22:12:29 crc kubenswrapper[4803]: I0127 22:12:29.935761 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 22:12:29 crc kubenswrapper[4803]: I0127 22:12:29.936073 4803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 22:12:29 crc kubenswrapper[4803]: I0127 22:12:29.937972 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 22:12:30 crc kubenswrapper[4803]: I0127 22:12:30.282514 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 22:12:30 crc kubenswrapper[4803]: I0127 22:12:30.717555 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 22:12:30 crc kubenswrapper[4803]: I0127 22:12:30.830750 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df9892cf-8ada-42c4-a4bf-b9c9416515d9","Type":"ContainerStarted","Data":"11912b9cb354be3b233f45b171a4908615c56d3c7b8b0f895b1b2f45c00abd55"} Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.847923 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df9892cf-8ada-42c4-a4bf-b9c9416515d9","Type":"ContainerStarted","Data":"6a0b0e773bdf6938a1a7cbbdeb1b3c70acd66fc3a26cdb60e805e623e2bcf614"} Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.848307 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.853081 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7cfbfb9f4d-z24kh"] Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.854585 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.866524 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7cfbfb9f4d-z24kh"] Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.931479 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.734694823 podStartE2EDuration="6.931452345s" podCreationTimestamp="2026-01-27 22:12:25 +0000 UTC" firstStartedPulling="2026-01-27 22:12:27.298636452 +0000 UTC m=+1499.714658151" lastFinishedPulling="2026-01-27 22:12:31.495393974 +0000 UTC m=+1503.911415673" observedRunningTime="2026-01-27 22:12:31.898224031 +0000 UTC m=+1504.314245730" watchObservedRunningTime="2026-01-27 22:12:31.931452345 +0000 UTC m=+1504.347474054" Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.936357 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-bdfb8f445-vd7f5"] Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.937936 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.961839 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data-custom\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.961900 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.962177 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-combined-ca-bundle\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.962196 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmg64\" (UniqueName: \"kubernetes.io/projected/552f794c-b47b-4f78-9f79-d989e7b621d7-kube-api-access-wmg64\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:31 crc kubenswrapper[4803]: I0127 22:12:31.982172 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-bdfb8f445-vd7f5"] Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.055073 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-68bc78f5bb-r5jpw"] Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.062080 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.077304 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-combined-ca-bundle\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.077369 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrp5f\" (UniqueName: \"kubernetes.io/projected/fa8732df-6c17-4d1f-9962-7c54b4809cb5-kube-api-access-qrp5f\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.077458 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.077605 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-combined-ca-bundle\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.077626 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmg64\" (UniqueName: \"kubernetes.io/projected/552f794c-b47b-4f78-9f79-d989e7b621d7-kube-api-access-wmg64\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.077699 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data-custom\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.077743 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.077787 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data-custom\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.117823 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-combined-ca-bundle\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: 
\"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.122512 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.125136 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmg64\" (UniqueName: \"kubernetes.io/projected/552f794c-b47b-4f78-9f79-d989e7b621d7-kube-api-access-wmg64\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.138444 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-68bc78f5bb-r5jpw"] Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.143853 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data-custom\") pod \"heat-engine-7cfbfb9f4d-z24kh\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.185343 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-combined-ca-bundle\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.185438 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-combined-ca-bundle\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.185463 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrp5f\" (UniqueName: \"kubernetes.io/projected/fa8732df-6c17-4d1f-9962-7c54b4809cb5-kube-api-access-qrp5f\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.185492 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9knv\" (UniqueName: \"kubernetes.io/projected/dd41e8ae-8eec-474a-8036-6bb7372dbd80-kube-api-access-v9knv\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.185510 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.185570 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data-custom\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.185616 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.185633 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data-custom\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.192140 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-combined-ca-bundle\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.193381 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data-custom\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.203006 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.214024 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.231004 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrp5f\" (UniqueName: \"kubernetes.io/projected/fa8732df-6c17-4d1f-9962-7c54b4809cb5-kube-api-access-qrp5f\") pod \"heat-api-bdfb8f445-vd7f5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.278791 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.293812 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.293988 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-combined-ca-bundle\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.294139 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9knv\" (UniqueName: \"kubernetes.io/projected/dd41e8ae-8eec-474a-8036-6bb7372dbd80-kube-api-access-v9knv\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.294655 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data-custom\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.298714 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data-custom\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.305041 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.310482 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-combined-ca-bundle\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.318730 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9knv\" (UniqueName: \"kubernetes.io/projected/dd41e8ae-8eec-474a-8036-6bb7372dbd80-kube-api-access-v9knv\") pod \"heat-cfnapi-68bc78f5bb-r5jpw\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.541425 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.831626 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7cfbfb9f4d-z24kh"] Jan 27 22:12:32 crc kubenswrapper[4803]: I0127 22:12:32.938269 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-bdfb8f445-vd7f5"] Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.034051 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.137609 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g8cxd"] Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.138191 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" podUID="f5bae15d-ce93-43a9-8fc4-49200676a31d" containerName="dnsmasq-dns" containerID="cri-o://02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508" gracePeriod=10 Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.205544 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-68bc78f5bb-r5jpw"] Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.830844 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.914532 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" event={"ID":"552f794c-b47b-4f78-9f79-d989e7b621d7","Type":"ContainerStarted","Data":"09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab"} Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.914581 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" event={"ID":"552f794c-b47b-4f78-9f79-d989e7b621d7","Type":"ContainerStarted","Data":"6e14a98b725f9ebdc7dd8a725c70b133134f40b82341eda4a7acf00aa786780e"} Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.916940 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.925077 4803 generic.go:334] "Generic (PLEG): container finished" podID="f5bae15d-ce93-43a9-8fc4-49200676a31d" containerID="02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508" exitCode=0 Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.925139 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" event={"ID":"f5bae15d-ce93-43a9-8fc4-49200676a31d","Type":"ContainerDied","Data":"02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508"} Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.925171 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" event={"ID":"f5bae15d-ce93-43a9-8fc4-49200676a31d","Type":"ContainerDied","Data":"c49231532faccce31ffcc8a6d7fe45e93cc66644c031ddf12a0aab35284bbadc"} Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.925190 4803 scope.go:117] "RemoveContainer" containerID="02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508" Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.925313 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g8cxd" Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.936701 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" podStartSLOduration=2.936688755 podStartE2EDuration="2.936688755s" podCreationTimestamp="2026-01-27 22:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:12:33.934200898 +0000 UTC m=+1506.350222597" watchObservedRunningTime="2026-01-27 22:12:33.936688755 +0000 UTC m=+1506.352710454" Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.943062 4803 generic.go:334] "Generic (PLEG): container finished" podID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" containerID="6eae4a607ad40963e7843f97a64c29f9a745beb513fb2653d97453ffc9f09ae3" exitCode=1 Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.943145 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bdfb8f445-vd7f5" event={"ID":"fa8732df-6c17-4d1f-9962-7c54b4809cb5","Type":"ContainerDied","Data":"6eae4a607ad40963e7843f97a64c29f9a745beb513fb2653d97453ffc9f09ae3"} Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.943169 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bdfb8f445-vd7f5" event={"ID":"fa8732df-6c17-4d1f-9962-7c54b4809cb5","Type":"ContainerStarted","Data":"d5bdedb3eecdd0fe561f8372100a0d0132064863adde617b80b15c2f9135e80a"} Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.952324 4803 scope.go:117] "RemoveContainer" containerID="6eae4a607ad40963e7843f97a64c29f9a745beb513fb2653d97453ffc9f09ae3" Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.956612 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-nb\") pod \"f5bae15d-ce93-43a9-8fc4-49200676a31d\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.956701 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-svc\") pod \"f5bae15d-ce93-43a9-8fc4-49200676a31d\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.956820 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-swift-storage-0\") pod \"f5bae15d-ce93-43a9-8fc4-49200676a31d\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.956856 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg9hl\" (UniqueName: \"kubernetes.io/projected/f5bae15d-ce93-43a9-8fc4-49200676a31d-kube-api-access-vg9hl\") pod \"f5bae15d-ce93-43a9-8fc4-49200676a31d\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.956962 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-sb\") pod \"f5bae15d-ce93-43a9-8fc4-49200676a31d\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.957014 4803 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-config\") pod \"f5bae15d-ce93-43a9-8fc4-49200676a31d\" (UID: \"f5bae15d-ce93-43a9-8fc4-49200676a31d\") " Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.968557 4803 scope.go:117] "RemoveContainer" containerID="82f3a3bf64ccb8a308714f10b0c423e51e7db24b763769ae50285a8b80267985" Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.968659 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" event={"ID":"dd41e8ae-8eec-474a-8036-6bb7372dbd80","Type":"ContainerStarted","Data":"82f3a3bf64ccb8a308714f10b0c423e51e7db24b763769ae50285a8b80267985"} Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.972171 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" event={"ID":"dd41e8ae-8eec-474a-8036-6bb7372dbd80","Type":"ContainerStarted","Data":"3f2c5fd3bfd6d060e4e166a5c21bba28299e2d34b36725718bf627519055f04a"} Jan 27 22:12:33 crc kubenswrapper[4803]: I0127 22:12:33.990582 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5bae15d-ce93-43a9-8fc4-49200676a31d-kube-api-access-vg9hl" (OuterVolumeSpecName: "kube-api-access-vg9hl") pod "f5bae15d-ce93-43a9-8fc4-49200676a31d" (UID: "f5bae15d-ce93-43a9-8fc4-49200676a31d"). InnerVolumeSpecName "kube-api-access-vg9hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.068572 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg9hl\" (UniqueName: \"kubernetes.io/projected/f5bae15d-ce93-43a9-8fc4-49200676a31d-kube-api-access-vg9hl\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.112409 4803 scope.go:117] "RemoveContainer" containerID="94d2b53bca0a76ae46cac772ed7d80b1de8b0e0f4ea481215e9667db782a9193" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.141606 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-config" (OuterVolumeSpecName: "config") pod "f5bae15d-ce93-43a9-8fc4-49200676a31d" (UID: "f5bae15d-ce93-43a9-8fc4-49200676a31d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.181057 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.198771 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5bae15d-ce93-43a9-8fc4-49200676a31d" (UID: "f5bae15d-ce93-43a9-8fc4-49200676a31d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.229031 4803 scope.go:117] "RemoveContainer" containerID="02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508" Jan 27 22:12:34 crc kubenswrapper[4803]: E0127 22:12:34.233340 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508\": container with ID starting with 02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508 not found: ID does not exist" containerID="02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.233398 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508"} err="failed to get container status \"02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508\": rpc error: code = NotFound desc = could not find container \"02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508\": container with ID starting with 02efba454f161b20295a2afd5ad12acd67ed61a6fb2a3f66b2c19adfcf510508 not found: ID does not exist" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.233429 4803 scope.go:117] "RemoveContainer" containerID="94d2b53bca0a76ae46cac772ed7d80b1de8b0e0f4ea481215e9667db782a9193" Jan 27 22:12:34 crc kubenswrapper[4803]: E0127 22:12:34.244288 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94d2b53bca0a76ae46cac772ed7d80b1de8b0e0f4ea481215e9667db782a9193\": container with ID starting with 94d2b53bca0a76ae46cac772ed7d80b1de8b0e0f4ea481215e9667db782a9193 not found: ID does not exist" containerID="94d2b53bca0a76ae46cac772ed7d80b1de8b0e0f4ea481215e9667db782a9193" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.244335 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94d2b53bca0a76ae46cac772ed7d80b1de8b0e0f4ea481215e9667db782a9193"} err="failed to get container status \"94d2b53bca0a76ae46cac772ed7d80b1de8b0e0f4ea481215e9667db782a9193\": rpc error: code = NotFound desc = could not find container \"94d2b53bca0a76ae46cac772ed7d80b1de8b0e0f4ea481215e9667db782a9193\": container with ID starting with 94d2b53bca0a76ae46cac772ed7d80b1de8b0e0f4ea481215e9667db782a9193 not found: ID does not exist" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.284056 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.381722 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5bae15d-ce93-43a9-8fc4-49200676a31d" (UID: "f5bae15d-ce93-43a9-8fc4-49200676a31d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.386665 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.432778 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5bae15d-ce93-43a9-8fc4-49200676a31d" (UID: "f5bae15d-ce93-43a9-8fc4-49200676a31d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.458050 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f5bae15d-ce93-43a9-8fc4-49200676a31d" (UID: "f5bae15d-ce93-43a9-8fc4-49200676a31d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.489389 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.489729 4803 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5bae15d-ce93-43a9-8fc4-49200676a31d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.632120 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g8cxd"] Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.656697 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g8cxd"] Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.990726 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bdfb8f445-vd7f5" event={"ID":"fa8732df-6c17-4d1f-9962-7c54b4809cb5","Type":"ContainerStarted","Data":"2d08be016329c21848ed0bc07936d9457ad377d608723a2e9a3dc95de34c840f"} Jan 27 22:12:34 crc kubenswrapper[4803]: I0127 22:12:34.991458 4803 scope.go:117] "RemoveContainer" containerID="2d08be016329c21848ed0bc07936d9457ad377d608723a2e9a3dc95de34c840f" Jan 27 22:12:34 crc kubenswrapper[4803]: E0127 22:12:34.991770 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-bdfb8f445-vd7f5_openstack(fa8732df-6c17-4d1f-9962-7c54b4809cb5)\"" pod="openstack/heat-api-bdfb8f445-vd7f5" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.029742 4803 generic.go:334] "Generic (PLEG): container finished" podID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerID="82f3a3bf64ccb8a308714f10b0c423e51e7db24b763769ae50285a8b80267985" exitCode=1 Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.030262 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" event={"ID":"dd41e8ae-8eec-474a-8036-6bb7372dbd80","Type":"ContainerDied","Data":"82f3a3bf64ccb8a308714f10b0c423e51e7db24b763769ae50285a8b80267985"} 
Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.030381 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" event={"ID":"dd41e8ae-8eec-474a-8036-6bb7372dbd80","Type":"ContainerStarted","Data":"9372383e0445b4dab23e8e94b962d9e05709307e6a343060f8c2a75e8d410065"} Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.030457 4803 scope.go:117] "RemoveContainer" containerID="82f3a3bf64ccb8a308714f10b0c423e51e7db24b763769ae50285a8b80267985" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.030854 4803 scope.go:117] "RemoveContainer" containerID="9372383e0445b4dab23e8e94b962d9e05709307e6a343060f8c2a75e8d410065" Jan 27 22:12:35 crc kubenswrapper[4803]: E0127 22:12:35.031159 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-68bc78f5bb-r5jpw_openstack(dd41e8ae-8eec-474a-8036-6bb7372dbd80)\"" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.428061 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7db646bcb9-fl7xv"] Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.428745 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7db646bcb9-fl7xv" podUID="197e06e5-d60b-421f-8708-a8c5b87e4bb3" containerName="heat-api" containerID="cri-o://965d72d7e4d0d24ea8c1bef43bae83c921274120e511eb0a3edddc7f9d0259f7" gracePeriod=60 Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.440229 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-7db646bcb9-fl7xv" podUID="197e06e5-d60b-421f-8708-a8c5b87e4bb3" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.229:8004/healthcheck\": EOF" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.440390 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7db646bcb9-fl7xv" podUID="197e06e5-d60b-421f-8708-a8c5b87e4bb3" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.229:8004/healthcheck\": EOF" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.456516 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6977659f7b-ttxqx"] Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.456759 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" podUID="694f20c4-bc76-42b5-b458-4e56227ca03d" containerName="heat-cfnapi" containerID="cri-o://2d2ebc85fca8e2f904feda71e595bf4bd7df830ac7424287db94e9de50b8ceb0" gracePeriod=60 Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.468794 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" podUID="694f20c4-bc76-42b5-b458-4e56227ca03d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.227:8000/healthcheck\": EOF" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.468949 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" podUID="694f20c4-bc76-42b5-b458-4e56227ca03d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.227:8000/healthcheck\": EOF" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.473687 4803 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-api-6cd7d794d7-nf5gr"] Jan 27 22:12:35 crc kubenswrapper[4803]: E0127 22:12:35.474426 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bae15d-ce93-43a9-8fc4-49200676a31d" containerName="init" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.474515 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bae15d-ce93-43a9-8fc4-49200676a31d" containerName="init" Jan 27 22:12:35 crc kubenswrapper[4803]: E0127 22:12:35.474605 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bae15d-ce93-43a9-8fc4-49200676a31d" containerName="dnsmasq-dns" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.474663 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bae15d-ce93-43a9-8fc4-49200676a31d" containerName="dnsmasq-dns" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.475027 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5bae15d-ce93-43a9-8fc4-49200676a31d" containerName="dnsmasq-dns" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.476020 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.478191 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.482944 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.497915 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cd7d794d7-nf5gr"] Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.500810 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" podUID="694f20c4-bc76-42b5-b458-4e56227ca03d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.227:8000/healthcheck\": EOF" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.523852 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2sjs\" (UniqueName: \"kubernetes.io/projected/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-kube-api-access-q2sjs\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.523942 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-internal-tls-certs\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.523982 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-public-tls-certs\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.524024 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-combined-ca-bundle\") pod \"heat-api-6cd7d794d7-nf5gr\" 
(UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.524131 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data-custom\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.524182 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.527943 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6c55c9f8f8-s8fzg"] Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.529605 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.534517 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.534543 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.538900 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c55c9f8f8-s8fzg"] Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626418 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-internal-tls-certs\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626498 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data-custom\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626520 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-combined-ca-bundle\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626571 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data-custom\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626635 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626678 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjsn6\" (UniqueName: \"kubernetes.io/projected/6211e4d6-a2aa-4243-9951-906324729104-kube-api-access-jjsn6\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626697 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626715 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2sjs\" (UniqueName: \"kubernetes.io/projected/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-kube-api-access-q2sjs\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626748 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-internal-tls-certs\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626781 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-public-tls-certs\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626805 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-public-tls-certs\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.626831 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-combined-ca-bundle\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.636415 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-combined-ca-bundle\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.636503 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-internal-tls-certs\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.637830 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.639375 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-public-tls-certs\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.640514 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data-custom\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.644461 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2sjs\" (UniqueName: \"kubernetes.io/projected/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-kube-api-access-q2sjs\") pod \"heat-api-6cd7d794d7-nf5gr\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") " pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.729097 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-public-tls-certs\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.729186 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-internal-tls-certs\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.729240 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data-custom\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.729267 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-combined-ca-bundle\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.729374 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjsn6\" (UniqueName: 
\"kubernetes.io/projected/6211e4d6-a2aa-4243-9951-906324729104-kube-api-access-jjsn6\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.729402 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.733475 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-internal-tls-certs\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.734072 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-combined-ca-bundle\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.734150 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-public-tls-certs\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.738621 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data-custom\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.739318 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.750248 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjsn6\" (UniqueName: \"kubernetes.io/projected/6211e4d6-a2aa-4243-9951-906324729104-kube-api-access-jjsn6\") pod \"heat-cfnapi-6c55c9f8f8-s8fzg\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") " pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.825816 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:35 crc kubenswrapper[4803]: I0127 22:12:35.857710 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:36 crc kubenswrapper[4803]: I0127 22:12:36.046012 4803 generic.go:334] "Generic (PLEG): container finished" podID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" containerID="2d08be016329c21848ed0bc07936d9457ad377d608723a2e9a3dc95de34c840f" exitCode=1 Jan 27 22:12:36 crc kubenswrapper[4803]: I0127 22:12:36.046112 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bdfb8f445-vd7f5" event={"ID":"fa8732df-6c17-4d1f-9962-7c54b4809cb5","Type":"ContainerDied","Data":"2d08be016329c21848ed0bc07936d9457ad377d608723a2e9a3dc95de34c840f"} Jan 27 22:12:36 crc kubenswrapper[4803]: I0127 22:12:36.046146 4803 scope.go:117] "RemoveContainer" containerID="6eae4a607ad40963e7843f97a64c29f9a745beb513fb2653d97453ffc9f09ae3" Jan 27 22:12:36 crc kubenswrapper[4803]: I0127 22:12:36.046842 4803 scope.go:117] "RemoveContainer" containerID="2d08be016329c21848ed0bc07936d9457ad377d608723a2e9a3dc95de34c840f" Jan 27 22:12:36 crc kubenswrapper[4803]: E0127 22:12:36.047308 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-bdfb8f445-vd7f5_openstack(fa8732df-6c17-4d1f-9962-7c54b4809cb5)\"" pod="openstack/heat-api-bdfb8f445-vd7f5" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" Jan 27 22:12:36 crc kubenswrapper[4803]: I0127 22:12:36.056704 4803 generic.go:334] "Generic (PLEG): container finished" podID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerID="9372383e0445b4dab23e8e94b962d9e05709307e6a343060f8c2a75e8d410065" exitCode=1 Jan 27 22:12:36 crc kubenswrapper[4803]: I0127 22:12:36.056745 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" event={"ID":"dd41e8ae-8eec-474a-8036-6bb7372dbd80","Type":"ContainerDied","Data":"9372383e0445b4dab23e8e94b962d9e05709307e6a343060f8c2a75e8d410065"} Jan 27 22:12:36 crc kubenswrapper[4803]: I0127 22:12:36.057478 4803 scope.go:117] "RemoveContainer" containerID="9372383e0445b4dab23e8e94b962d9e05709307e6a343060f8c2a75e8d410065" Jan 27 22:12:36 crc kubenswrapper[4803]: E0127 22:12:36.057696 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-68bc78f5bb-r5jpw_openstack(dd41e8ae-8eec-474a-8036-6bb7372dbd80)\"" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" Jan 27 22:12:36 crc kubenswrapper[4803]: I0127 22:12:36.362489 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5bae15d-ce93-43a9-8fc4-49200676a31d" path="/var/lib/kubelet/pods/f5bae15d-ce93-43a9-8fc4-49200676a31d/volumes" Jan 27 22:12:36 crc kubenswrapper[4803]: I0127 22:12:36.631638 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:12:36 crc kubenswrapper[4803]: I0127 22:12:36.718300 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:12:36 crc kubenswrapper[4803]: I0127 22:12:36.891972 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jqw45"] Jan 27 22:12:37 crc kubenswrapper[4803]: I0127 22:12:37.068265 4803 scope.go:117] "RemoveContainer" containerID="9372383e0445b4dab23e8e94b962d9e05709307e6a343060f8c2a75e8d410065" 
Jan 27 22:12:37 crc kubenswrapper[4803]: I0127 22:12:37.068622 4803 scope.go:117] "RemoveContainer" containerID="2d08be016329c21848ed0bc07936d9457ad377d608723a2e9a3dc95de34c840f" Jan 27 22:12:37 crc kubenswrapper[4803]: E0127 22:12:37.068888 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-68bc78f5bb-r5jpw_openstack(dd41e8ae-8eec-474a-8036-6bb7372dbd80)\"" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" Jan 27 22:12:37 crc kubenswrapper[4803]: E0127 22:12:37.068959 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-bdfb8f445-vd7f5_openstack(fa8732df-6c17-4d1f-9962-7c54b4809cb5)\"" pod="openstack/heat-api-bdfb8f445-vd7f5" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" Jan 27 22:12:37 crc kubenswrapper[4803]: I0127 22:12:37.279728 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:37 crc kubenswrapper[4803]: I0127 22:12:37.279873 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:37 crc kubenswrapper[4803]: I0127 22:12:37.542321 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:37 crc kubenswrapper[4803]: I0127 22:12:37.542400 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:37 crc kubenswrapper[4803]: I0127 22:12:37.772013 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:37 crc kubenswrapper[4803]: I0127 22:12:37.772337 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="ceilometer-central-agent" containerID="cri-o://7eeae033d86cdd944b894bd630460001698191db360ccca4f00d8f3ce90ec9a2" gracePeriod=30 Jan 27 22:12:37 crc kubenswrapper[4803]: I0127 22:12:37.772395 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="proxy-httpd" containerID="cri-o://6a0b0e773bdf6938a1a7cbbdeb1b3c70acd66fc3a26cdb60e805e623e2bcf614" gracePeriod=30 Jan 27 22:12:37 crc kubenswrapper[4803]: I0127 22:12:37.772483 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="sg-core" containerID="cri-o://11912b9cb354be3b233f45b171a4908615c56d3c7b8b0f895b1b2f45c00abd55" gracePeriod=30 Jan 27 22:12:37 crc kubenswrapper[4803]: I0127 22:12:37.772533 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="ceilometer-notification-agent" containerID="cri-o://f0bf9f7050fabf76dd2daa11c5ae5f9a042d72b9923c4aefb750da28c8cd91bc" gracePeriod=30 Jan 27 22:12:38 crc kubenswrapper[4803]: I0127 22:12:38.082786 4803 generic.go:334] "Generic (PLEG): container finished" podID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerID="6a0b0e773bdf6938a1a7cbbdeb1b3c70acd66fc3a26cdb60e805e623e2bcf614" exitCode=0 Jan 27 22:12:38 
crc kubenswrapper[4803]: I0127 22:12:38.084190 4803 generic.go:334] "Generic (PLEG): container finished" podID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerID="11912b9cb354be3b233f45b171a4908615c56d3c7b8b0f895b1b2f45c00abd55" exitCode=2 Jan 27 22:12:38 crc kubenswrapper[4803]: I0127 22:12:38.082915 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df9892cf-8ada-42c4-a4bf-b9c9416515d9","Type":"ContainerDied","Data":"6a0b0e773bdf6938a1a7cbbdeb1b3c70acd66fc3a26cdb60e805e623e2bcf614"} Jan 27 22:12:38 crc kubenswrapper[4803]: I0127 22:12:38.084461 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df9892cf-8ada-42c4-a4bf-b9c9416515d9","Type":"ContainerDied","Data":"11912b9cb354be3b233f45b171a4908615c56d3c7b8b0f895b1b2f45c00abd55"} Jan 27 22:12:38 crc kubenswrapper[4803]: I0127 22:12:38.084516 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jqw45" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="registry-server" containerID="cri-o://5bb86a95edfba57003c69da0086cf3fa56e31c9d3ac1d6d219b514f2fd1e46f6" gracePeriod=2 Jan 27 22:12:38 crc kubenswrapper[4803]: I0127 22:12:38.085424 4803 scope.go:117] "RemoveContainer" containerID="9372383e0445b4dab23e8e94b962d9e05709307e6a343060f8c2a75e8d410065" Jan 27 22:12:38 crc kubenswrapper[4803]: I0127 22:12:38.085524 4803 scope.go:117] "RemoveContainer" containerID="2d08be016329c21848ed0bc07936d9457ad377d608723a2e9a3dc95de34c840f" Jan 27 22:12:38 crc kubenswrapper[4803]: E0127 22:12:38.085799 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-68bc78f5bb-r5jpw_openstack(dd41e8ae-8eec-474a-8036-6bb7372dbd80)\"" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" Jan 27 22:12:38 crc kubenswrapper[4803]: E0127 22:12:38.086419 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-bdfb8f445-vd7f5_openstack(fa8732df-6c17-4d1f-9962-7c54b4809cb5)\"" pod="openstack/heat-api-bdfb8f445-vd7f5" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" Jan 27 22:12:38 crc kubenswrapper[4803]: I0127 22:12:38.826068 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7db646bcb9-fl7xv" podUID="197e06e5-d60b-421f-8708-a8c5b87e4bb3" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.229:8004/healthcheck\": read tcp 10.217.0.2:49560->10.217.0.229:8004: read: connection reset by peer" Jan 27 22:12:38 crc kubenswrapper[4803]: I0127 22:12:38.826831 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7db646bcb9-fl7xv" podUID="197e06e5-d60b-421f-8708-a8c5b87e4bb3" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.229:8004/healthcheck\": dial tcp 10.217.0.229:8004: connect: connection refused" Jan 27 22:12:39 crc kubenswrapper[4803]: I0127 22:12:39.100993 4803 generic.go:334] "Generic (PLEG): container finished" podID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerID="5bb86a95edfba57003c69da0086cf3fa56e31c9d3ac1d6d219b514f2fd1e46f6" exitCode=0 Jan 27 22:12:39 crc kubenswrapper[4803]: I0127 22:12:39.101060 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jqw45" event={"ID":"8557daa0-d032-4ce3-845b-2ff667b49c7a","Type":"ContainerDied","Data":"5bb86a95edfba57003c69da0086cf3fa56e31c9d3ac1d6d219b514f2fd1e46f6"} Jan 27 22:12:39 crc kubenswrapper[4803]: I0127 22:12:39.105494 4803 generic.go:334] "Generic (PLEG): container finished" podID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerID="f0bf9f7050fabf76dd2daa11c5ae5f9a042d72b9923c4aefb750da28c8cd91bc" exitCode=0 Jan 27 22:12:39 crc kubenswrapper[4803]: I0127 22:12:39.105559 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df9892cf-8ada-42c4-a4bf-b9c9416515d9","Type":"ContainerDied","Data":"f0bf9f7050fabf76dd2daa11c5ae5f9a042d72b9923c4aefb750da28c8cd91bc"} Jan 27 22:12:39 crc kubenswrapper[4803]: I0127 22:12:39.107578 4803 generic.go:334] "Generic (PLEG): container finished" podID="197e06e5-d60b-421f-8708-a8c5b87e4bb3" containerID="965d72d7e4d0d24ea8c1bef43bae83c921274120e511eb0a3edddc7f9d0259f7" exitCode=0 Jan 27 22:12:39 crc kubenswrapper[4803]: I0127 22:12:39.108000 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7db646bcb9-fl7xv" event={"ID":"197e06e5-d60b-421f-8708-a8c5b87e4bb3","Type":"ContainerDied","Data":"965d72d7e4d0d24ea8c1bef43bae83c921274120e511eb0a3edddc7f9d0259f7"} Jan 27 22:12:39 crc kubenswrapper[4803]: I0127 22:12:39.108389 4803 scope.go:117] "RemoveContainer" containerID="2d08be016329c21848ed0bc07936d9457ad377d608723a2e9a3dc95de34c840f" Jan 27 22:12:39 crc kubenswrapper[4803]: E0127 22:12:39.108768 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-bdfb8f445-vd7f5_openstack(fa8732df-6c17-4d1f-9962-7c54b4809cb5)\"" pod="openstack/heat-api-bdfb8f445-vd7f5" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" Jan 27 22:12:40 crc kubenswrapper[4803]: I0127 22:12:40.876729 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" podUID="694f20c4-bc76-42b5-b458-4e56227ca03d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.227:8000/healthcheck\": read tcp 10.217.0.2:43974->10.217.0.227:8000: read: connection reset by peer" Jan 27 22:12:41 crc kubenswrapper[4803]: I0127 22:12:41.161038 4803 generic.go:334] "Generic (PLEG): container finished" podID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerID="7eeae033d86cdd944b894bd630460001698191db360ccca4f00d8f3ce90ec9a2" exitCode=0 Jan 27 22:12:41 crc kubenswrapper[4803]: I0127 22:12:41.161195 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df9892cf-8ada-42c4-a4bf-b9c9416515d9","Type":"ContainerDied","Data":"7eeae033d86cdd944b894bd630460001698191db360ccca4f00d8f3ce90ec9a2"} Jan 27 22:12:41 crc kubenswrapper[4803]: I0127 22:12:41.166038 4803 generic.go:334] "Generic (PLEG): container finished" podID="694f20c4-bc76-42b5-b458-4e56227ca03d" containerID="2d2ebc85fca8e2f904feda71e595bf4bd7df830ac7424287db94e9de50b8ceb0" exitCode=0 Jan 27 22:12:41 crc kubenswrapper[4803]: I0127 22:12:41.166090 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" event={"ID":"694f20c4-bc76-42b5-b458-4e56227ca03d","Type":"ContainerDied","Data":"2d2ebc85fca8e2f904feda71e595bf4bd7df830ac7424287db94e9de50b8ceb0"} Jan 27 22:12:41 crc kubenswrapper[4803]: I0127 22:12:41.955181 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:41 crc kubenswrapper[4803]: I0127 22:12:41.971920 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:12:41 crc kubenswrapper[4803]: I0127 22:12:41.993992 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.012231 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.038039 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-combined-ca-bundle\") pod \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.038123 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjzj4\" (UniqueName: \"kubernetes.io/projected/197e06e5-d60b-421f-8708-a8c5b87e4bb3-kube-api-access-vjzj4\") pod \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.038154 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data-custom\") pod \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.038181 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data\") pod \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.053107 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "197e06e5-d60b-421f-8708-a8c5b87e4bb3" (UID: "197e06e5-d60b-421f-8708-a8c5b87e4bb3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.053112 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/197e06e5-d60b-421f-8708-a8c5b87e4bb3-kube-api-access-vjzj4" (OuterVolumeSpecName: "kube-api-access-vjzj4") pod "197e06e5-d60b-421f-8708-a8c5b87e4bb3" (UID: "197e06e5-d60b-421f-8708-a8c5b87e4bb3"). InnerVolumeSpecName "kube-api-access-vjzj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.126495 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "197e06e5-d60b-421f-8708-a8c5b87e4bb3" (UID: "197e06e5-d60b-421f-8708-a8c5b87e4bb3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.141018 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data\") pod \"694f20c4-bc76-42b5-b458-4e56227ca03d\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.141138 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data" (OuterVolumeSpecName: "config-data") pod "197e06e5-d60b-421f-8708-a8c5b87e4bb3" (UID: "197e06e5-d60b-421f-8708-a8c5b87e4bb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.141245 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-config-data\") pod \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.141347 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfmxj\" (UniqueName: \"kubernetes.io/projected/694f20c4-bc76-42b5-b458-4e56227ca03d-kube-api-access-sfmxj\") pod \"694f20c4-bc76-42b5-b458-4e56227ca03d\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.141467 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-scripts\") pod \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.141557 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data-custom\") pod \"694f20c4-bc76-42b5-b458-4e56227ca03d\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.141698 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data\") pod \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\" (UID: \"197e06e5-d60b-421f-8708-a8c5b87e4bb3\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.141815 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-utilities\") pod \"8557daa0-d032-4ce3-845b-2ff667b49c7a\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.141969 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-log-httpd\") pod \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.142092 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-run-httpd\") pod 
\"df9892cf-8ada-42c4-a4bf-b9c9416515d9\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.142212 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-sg-core-conf-yaml\") pod \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.142341 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-combined-ca-bundle\") pod \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.142442 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-combined-ca-bundle\") pod \"694f20c4-bc76-42b5-b458-4e56227ca03d\" (UID: \"694f20c4-bc76-42b5-b458-4e56227ca03d\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.142552 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42fl5\" (UniqueName: \"kubernetes.io/projected/8557daa0-d032-4ce3-845b-2ff667b49c7a-kube-api-access-42fl5\") pod \"8557daa0-d032-4ce3-845b-2ff667b49c7a\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.142656 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c59r\" (UniqueName: \"kubernetes.io/projected/df9892cf-8ada-42c4-a4bf-b9c9416515d9-kube-api-access-2c59r\") pod \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\" (UID: \"df9892cf-8ada-42c4-a4bf-b9c9416515d9\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.142784 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-catalog-content\") pod \"8557daa0-d032-4ce3-845b-2ff667b49c7a\" (UID: \"8557daa0-d032-4ce3-845b-2ff667b49c7a\") " Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.144500 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.144834 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjzj4\" (UniqueName: \"kubernetes.io/projected/197e06e5-d60b-421f-8708-a8c5b87e4bb3-kube-api-access-vjzj4\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.144974 4803 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.145801 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "df9892cf-8ada-42c4-a4bf-b9c9416515d9" (UID: "df9892cf-8ada-42c4-a4bf-b9c9416515d9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.146240 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-utilities" (OuterVolumeSpecName: "utilities") pod "8557daa0-d032-4ce3-845b-2ff667b49c7a" (UID: "8557daa0-d032-4ce3-845b-2ff667b49c7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: W0127 22:12:42.146387 4803 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/197e06e5-d60b-421f-8708-a8c5b87e4bb3/volumes/kubernetes.io~secret/config-data Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.146455 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data" (OuterVolumeSpecName: "config-data") pod "197e06e5-d60b-421f-8708-a8c5b87e4bb3" (UID: "197e06e5-d60b-421f-8708-a8c5b87e4bb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.149686 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "df9892cf-8ada-42c4-a4bf-b9c9416515d9" (UID: "df9892cf-8ada-42c4-a4bf-b9c9416515d9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.150190 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694f20c4-bc76-42b5-b458-4e56227ca03d-kube-api-access-sfmxj" (OuterVolumeSpecName: "kube-api-access-sfmxj") pod "694f20c4-bc76-42b5-b458-4e56227ca03d" (UID: "694f20c4-bc76-42b5-b458-4e56227ca03d"). InnerVolumeSpecName "kube-api-access-sfmxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.158035 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "694f20c4-bc76-42b5-b458-4e56227ca03d" (UID: "694f20c4-bc76-42b5-b458-4e56227ca03d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.165426 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9892cf-8ada-42c4-a4bf-b9c9416515d9-kube-api-access-2c59r" (OuterVolumeSpecName: "kube-api-access-2c59r") pod "df9892cf-8ada-42c4-a4bf-b9c9416515d9" (UID: "df9892cf-8ada-42c4-a4bf-b9c9416515d9"). InnerVolumeSpecName "kube-api-access-2c59r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.165934 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-scripts" (OuterVolumeSpecName: "scripts") pod "df9892cf-8ada-42c4-a4bf-b9c9416515d9" (UID: "df9892cf-8ada-42c4-a4bf-b9c9416515d9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.168699 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8557daa0-d032-4ce3-845b-2ff667b49c7a-kube-api-access-42fl5" (OuterVolumeSpecName: "kube-api-access-42fl5") pod "8557daa0-d032-4ce3-845b-2ff667b49c7a" (UID: "8557daa0-d032-4ce3-845b-2ff667b49c7a"). InnerVolumeSpecName "kube-api-access-42fl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.202384 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.202450 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df9892cf-8ada-42c4-a4bf-b9c9416515d9","Type":"ContainerDied","Data":"fe27424531d5544e91c23d73597fa87c2b471616e815a92a312f10bdc5012057"} Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.202493 4803 scope.go:117] "RemoveContainer" containerID="6a0b0e773bdf6938a1a7cbbdeb1b3c70acd66fc3a26cdb60e805e623e2bcf614" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.238001 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7db646bcb9-fl7xv" event={"ID":"197e06e5-d60b-421f-8708-a8c5b87e4bb3","Type":"ContainerDied","Data":"cfbfc884cb63baeeee4f1d3bac2cf497ddda62c3a52e77539270e952d91f50b7"} Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.238059 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "df9892cf-8ada-42c4-a4bf-b9c9416515d9" (UID: "df9892cf-8ada-42c4-a4bf-b9c9416515d9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.238090 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7db646bcb9-fl7xv" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.242630 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" event={"ID":"694f20c4-bc76-42b5-b458-4e56227ca03d","Type":"ContainerDied","Data":"a7dbba77ab26a85397b657c4ddcf0e882ffa8ac6ead7936541ab76bf055bab12"} Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.242711 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6977659f7b-ttxqx" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.248091 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.248116 4803 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.248127 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197e06e5-d60b-421f-8708-a8c5b87e4bb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.248138 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.248146 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.248155 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df9892cf-8ada-42c4-a4bf-b9c9416515d9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.248168 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.248176 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42fl5\" (UniqueName: \"kubernetes.io/projected/8557daa0-d032-4ce3-845b-2ff667b49c7a-kube-api-access-42fl5\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.248187 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c59r\" (UniqueName: \"kubernetes.io/projected/df9892cf-8ada-42c4-a4bf-b9c9416515d9-kube-api-access-2c59r\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.248197 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfmxj\" (UniqueName: \"kubernetes.io/projected/694f20c4-bc76-42b5-b458-4e56227ca03d-kube-api-access-sfmxj\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.250226 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data" (OuterVolumeSpecName: "config-data") pod "694f20c4-bc76-42b5-b458-4e56227ca03d" (UID: "694f20c4-bc76-42b5-b458-4e56227ca03d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.251117 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqw45" event={"ID":"8557daa0-d032-4ce3-845b-2ff667b49c7a","Type":"ContainerDied","Data":"47d663531a0f5ed84d62ae544e8531c99249a7efc278c7b8935c19a9a1de2a48"} Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.251173 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jqw45" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.251999 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "694f20c4-bc76-42b5-b458-4e56227ca03d" (UID: "694f20c4-bc76-42b5-b458-4e56227ca03d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.293223 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8557daa0-d032-4ce3-845b-2ff667b49c7a" (UID: "8557daa0-d032-4ce3-845b-2ff667b49c7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.335656 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df9892cf-8ada-42c4-a4bf-b9c9416515d9" (UID: "df9892cf-8ada-42c4-a4bf-b9c9416515d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.351215 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.351520 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.351858 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8557daa0-d032-4ce3-845b-2ff667b49c7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.351964 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694f20c4-bc76-42b5-b458-4e56227ca03d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.358549 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cd7d794d7-nf5gr"] Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.413542 4803 scope.go:117] "RemoveContainer" containerID="11912b9cb354be3b233f45b171a4908615c56d3c7b8b0f895b1b2f45c00abd55" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.440028 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7db646bcb9-fl7xv"] Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.444085 4803 scope.go:117] "RemoveContainer" containerID="f0bf9f7050fabf76dd2daa11c5ae5f9a042d72b9923c4aefb750da28c8cd91bc" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.449994 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-config-data" (OuterVolumeSpecName: "config-data") pod "df9892cf-8ada-42c4-a4bf-b9c9416515d9" (UID: "df9892cf-8ada-42c4-a4bf-b9c9416515d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.452293 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7db646bcb9-fl7xv"] Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.454246 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9892cf-8ada-42c4-a4bf-b9c9416515d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.474718 4803 scope.go:117] "RemoveContainer" containerID="7eeae033d86cdd944b894bd630460001698191db360ccca4f00d8f3ce90ec9a2" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.498767 4803 scope.go:117] "RemoveContainer" containerID="965d72d7e4d0d24ea8c1bef43bae83c921274120e511eb0a3edddc7f9d0259f7" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.531055 4803 scope.go:117] "RemoveContainer" containerID="2d2ebc85fca8e2f904feda71e595bf4bd7df830ac7424287db94e9de50b8ceb0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.554629 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.566022 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.570392 4803 scope.go:117] "RemoveContainer" containerID="5bb86a95edfba57003c69da0086cf3fa56e31c9d3ac1d6d219b514f2fd1e46f6" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.576583 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c55c9f8f8-s8fzg"] Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.590223 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:42 crc kubenswrapper[4803]: E0127 22:12:42.590635 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="ceilometer-central-agent" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.590652 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="ceilometer-central-agent" Jan 27 22:12:42 crc kubenswrapper[4803]: E0127 22:12:42.590672 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="extract-utilities" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.590704 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="extract-utilities" Jan 27 22:12:42 crc kubenswrapper[4803]: E0127 22:12:42.590719 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="registry-server" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.590727 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="registry-server" Jan 27 22:12:42 crc kubenswrapper[4803]: E0127 22:12:42.590745 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="proxy-httpd" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.590751 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="proxy-httpd" Jan 27 22:12:42 crc kubenswrapper[4803]: E0127 22:12:42.590760 4803 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="sg-core" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.590766 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="sg-core" Jan 27 22:12:42 crc kubenswrapper[4803]: E0127 22:12:42.590780 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197e06e5-d60b-421f-8708-a8c5b87e4bb3" containerName="heat-api" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.590786 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="197e06e5-d60b-421f-8708-a8c5b87e4bb3" containerName="heat-api" Jan 27 22:12:42 crc kubenswrapper[4803]: E0127 22:12:42.590804 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="extract-content" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.590810 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="extract-content" Jan 27 22:12:42 crc kubenswrapper[4803]: E0127 22:12:42.590819 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694f20c4-bc76-42b5-b458-4e56227ca03d" containerName="heat-cfnapi" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.590825 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="694f20c4-bc76-42b5-b458-4e56227ca03d" containerName="heat-cfnapi" Jan 27 22:12:42 crc kubenswrapper[4803]: E0127 22:12:42.590844 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="ceilometer-notification-agent" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.590866 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="ceilometer-notification-agent" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.591070 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="197e06e5-d60b-421f-8708-a8c5b87e4bb3" containerName="heat-api" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.591081 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" containerName="registry-server" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.591095 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="ceilometer-central-agent" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.591108 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="694f20c4-bc76-42b5-b458-4e56227ca03d" containerName="heat-cfnapi" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.591114 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="proxy-httpd" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.591125 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="sg-core" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.591135 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" containerName="ceilometer-notification-agent" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.593507 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.596979 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.598014 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.605463 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6977659f7b-ttxqx"] Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.623660 4803 scope.go:117] "RemoveContainer" containerID="4cb4af0f7644e519d14707c2a04583f119460bee19bc289a4e25cded524d7e4d" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.660918 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6977659f7b-ttxqx"] Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.668005 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-config-data\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.668060 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.668115 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-log-httpd\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.668236 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-run-httpd\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.668456 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmsdf\" (UniqueName: \"kubernetes.io/projected/de8b9385-8326-4cf2-ab68-90ce9a2d9608-kube-api-access-lmsdf\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.668505 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.668628 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-scripts\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 
22:12:42.673640 4803 scope.go:117] "RemoveContainer" containerID="14b1690e6be58945815d2b583a94eb3e93557c1bd32cc470a5a63069f162fd95" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.707928 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.723094 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jqw45"] Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.747019 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jqw45"] Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.771176 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-config-data\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.771218 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.771248 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-log-httpd\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.771292 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-run-httpd\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.771327 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmsdf\" (UniqueName: \"kubernetes.io/projected/de8b9385-8326-4cf2-ab68-90ce9a2d9608-kube-api-access-lmsdf\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.771345 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.771383 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-scripts\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.772083 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-run-httpd\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.772358 4803 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-log-httpd\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.774745 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-scripts\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.775497 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-config-data\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.777429 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.777804 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.792433 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmsdf\" (UniqueName: \"kubernetes.io/projected/de8b9385-8326-4cf2-ab68-90ce9a2d9608-kube-api-access-lmsdf\") pod \"ceilometer-0\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " pod="openstack/ceilometer-0" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.938152 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:12:42 crc kubenswrapper[4803]: I0127 22:12:42.956397 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:43 crc kubenswrapper[4803]: I0127 22:12:43.265006 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-klms9" event={"ID":"a00ff690-b44a-4a6e-9bf3-560344feda39","Type":"ContainerStarted","Data":"dce989b27f022c765459e624f6cc7762dc4bfc64a9afc7c86bbcf98625aae767"} Jan 27 22:12:43 crc kubenswrapper[4803]: I0127 22:12:43.279218 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" event={"ID":"6211e4d6-a2aa-4243-9951-906324729104","Type":"ContainerStarted","Data":"572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59"} Jan 27 22:12:43 crc kubenswrapper[4803]: I0127 22:12:43.279262 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" event={"ID":"6211e4d6-a2aa-4243-9951-906324729104","Type":"ContainerStarted","Data":"e6e838de4990a15f74a99f0fb31cc0200f03813a8ea73bd185f7213d365eed98"} Jan 27 22:12:43 crc kubenswrapper[4803]: I0127 22:12:43.281255 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:43 crc kubenswrapper[4803]: I0127 22:12:43.290730 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cd7d794d7-nf5gr" event={"ID":"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6","Type":"ContainerStarted","Data":"f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e"} Jan 27 22:12:43 crc kubenswrapper[4803]: I0127 22:12:43.290803 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cd7d794d7-nf5gr" event={"ID":"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6","Type":"ContainerStarted","Data":"e30b2d819b4209053226320edae6fb50f39c26950174d441bc2c828571cd633d"} Jan 27 22:12:43 crc kubenswrapper[4803]: I0127 22:12:43.292389 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:43 crc kubenswrapper[4803]: I0127 22:12:43.304284 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-klms9" podStartSLOduration=2.794578584 podStartE2EDuration="19.304261833s" podCreationTimestamp="2026-01-27 22:12:24 +0000 UTC" firstStartedPulling="2026-01-27 22:12:25.411027956 +0000 UTC m=+1497.827049655" lastFinishedPulling="2026-01-27 22:12:41.920711205 +0000 UTC m=+1514.336732904" observedRunningTime="2026-01-27 22:12:43.282277771 +0000 UTC m=+1515.698299480" watchObservedRunningTime="2026-01-27 22:12:43.304261833 +0000 UTC m=+1515.720283552" Jan 27 22:12:43 crc kubenswrapper[4803]: I0127 22:12:43.309700 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" podStartSLOduration=8.309685379 podStartE2EDuration="8.309685379s" podCreationTimestamp="2026-01-27 22:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:12:43.300398698 +0000 UTC m=+1515.716420397" watchObservedRunningTime="2026-01-27 22:12:43.309685379 +0000 UTC m=+1515.725707078" Jan 27 22:12:43 crc kubenswrapper[4803]: I0127 22:12:43.334325 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6cd7d794d7-nf5gr" podStartSLOduration=8.334305551 podStartE2EDuration="8.334305551s" podCreationTimestamp="2026-01-27 22:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:12:43.320225002 +0000 UTC m=+1515.736246711" watchObservedRunningTime="2026-01-27 22:12:43.334305551 +0000 UTC m=+1515.750327250" Jan 27 22:12:43 crc kubenswrapper[4803]: I0127 22:12:43.487815 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:44 crc kubenswrapper[4803]: I0127 22:12:44.323962 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="197e06e5-d60b-421f-8708-a8c5b87e4bb3" path="/var/lib/kubelet/pods/197e06e5-d60b-421f-8708-a8c5b87e4bb3/volumes" Jan 27 22:12:44 crc kubenswrapper[4803]: I0127 22:12:44.324973 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694f20c4-bc76-42b5-b458-4e56227ca03d" path="/var/lib/kubelet/pods/694f20c4-bc76-42b5-b458-4e56227ca03d/volumes" Jan 27 22:12:44 crc kubenswrapper[4803]: I0127 22:12:44.325552 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8557daa0-d032-4ce3-845b-2ff667b49c7a" path="/var/lib/kubelet/pods/8557daa0-d032-4ce3-845b-2ff667b49c7a/volumes" Jan 27 22:12:44 crc kubenswrapper[4803]: I0127 22:12:44.326728 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df9892cf-8ada-42c4-a4bf-b9c9416515d9" path="/var/lib/kubelet/pods/df9892cf-8ada-42c4-a4bf-b9c9416515d9/volumes" Jan 27 22:12:44 crc kubenswrapper[4803]: I0127 22:12:44.338006 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de8b9385-8326-4cf2-ab68-90ce9a2d9608","Type":"ContainerStarted","Data":"69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4"} Jan 27 22:12:44 crc kubenswrapper[4803]: I0127 22:12:44.338041 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de8b9385-8326-4cf2-ab68-90ce9a2d9608","Type":"ContainerStarted","Data":"d1ec6b90f114ced48798dc3d125fad96e752cb1439f76e51bf259971e07355f3"} Jan 27 22:12:45 crc kubenswrapper[4803]: I0127 22:12:45.349906 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de8b9385-8326-4cf2-ab68-90ce9a2d9608","Type":"ContainerStarted","Data":"6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5"} Jan 27 22:12:45 crc kubenswrapper[4803]: I0127 22:12:45.966770 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:46 crc kubenswrapper[4803]: I0127 22:12:46.343755 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:12:46 crc kubenswrapper[4803]: I0127 22:12:46.344057 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:12:46 crc kubenswrapper[4803]: I0127 22:12:46.344095 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 22:12:46 crc kubenswrapper[4803]: I0127 22:12:46.344791 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 22:12:46 crc kubenswrapper[4803]: I0127 22:12:46.344845 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" gracePeriod=600 Jan 27 22:12:46 crc kubenswrapper[4803]: I0127 22:12:46.362966 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de8b9385-8326-4cf2-ab68-90ce9a2d9608","Type":"ContainerStarted","Data":"233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b"} Jan 27 22:12:46 crc kubenswrapper[4803]: E0127 22:12:46.472028 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:12:47 crc kubenswrapper[4803]: I0127 22:12:47.385512 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" exitCode=0 Jan 27 22:12:47 crc kubenswrapper[4803]: I0127 22:12:47.385621 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c"} Jan 27 22:12:47 crc kubenswrapper[4803]: I0127 22:12:47.385957 4803 scope.go:117] "RemoveContainer" containerID="44535dae9f522c885b28c5811071a2781a43938af387dee7b52c5fee20b7bdeb" Jan 27 22:12:47 crc kubenswrapper[4803]: I0127 22:12:47.386785 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:12:47 crc kubenswrapper[4803]: E0127 22:12:47.387258 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:12:47 crc kubenswrapper[4803]: I0127 22:12:47.392943 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de8b9385-8326-4cf2-ab68-90ce9a2d9608","Type":"ContainerStarted","Data":"e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253"} Jan 27 22:12:47 crc kubenswrapper[4803]: I0127 22:12:47.393048 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="ceilometer-central-agent" containerID="cri-o://69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4" gracePeriod=30 Jan 
27 22:12:47 crc kubenswrapper[4803]: I0127 22:12:47.393114 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 22:12:47 crc kubenswrapper[4803]: I0127 22:12:47.393126 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="sg-core" containerID="cri-o://233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b" gracePeriod=30 Jan 27 22:12:47 crc kubenswrapper[4803]: I0127 22:12:47.393115 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="proxy-httpd" containerID="cri-o://e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253" gracePeriod=30 Jan 27 22:12:47 crc kubenswrapper[4803]: I0127 22:12:47.393180 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="ceilometer-notification-agent" containerID="cri-o://6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5" gracePeriod=30 Jan 27 22:12:47 crc kubenswrapper[4803]: I0127 22:12:47.442974 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.378651353 podStartE2EDuration="5.442956033s" podCreationTimestamp="2026-01-27 22:12:42 +0000 UTC" firstStartedPulling="2026-01-27 22:12:43.487019279 +0000 UTC m=+1515.903040978" lastFinishedPulling="2026-01-27 22:12:46.551323959 +0000 UTC m=+1518.967345658" observedRunningTime="2026-01-27 22:12:47.433089508 +0000 UTC m=+1519.849111207" watchObservedRunningTime="2026-01-27 22:12:47.442956033 +0000 UTC m=+1519.858977732" Jan 27 22:12:48 crc kubenswrapper[4803]: I0127 22:12:48.405245 4803 generic.go:334] "Generic (PLEG): container finished" podID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerID="e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253" exitCode=0 Jan 27 22:12:48 crc kubenswrapper[4803]: I0127 22:12:48.405547 4803 generic.go:334] "Generic (PLEG): container finished" podID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerID="233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b" exitCode=2 Jan 27 22:12:48 crc kubenswrapper[4803]: I0127 22:12:48.405558 4803 generic.go:334] "Generic (PLEG): container finished" podID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerID="6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5" exitCode=0 Jan 27 22:12:48 crc kubenswrapper[4803]: I0127 22:12:48.405323 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de8b9385-8326-4cf2-ab68-90ce9a2d9608","Type":"ContainerDied","Data":"e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253"} Jan 27 22:12:48 crc kubenswrapper[4803]: I0127 22:12:48.405618 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de8b9385-8326-4cf2-ab68-90ce9a2d9608","Type":"ContainerDied","Data":"233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b"} Jan 27 22:12:48 crc kubenswrapper[4803]: I0127 22:12:48.405631 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de8b9385-8326-4cf2-ab68-90ce9a2d9608","Type":"ContainerDied","Data":"6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5"} Jan 27 22:12:52 crc kubenswrapper[4803]: I0127 22:12:52.248996 4803 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:12:52 crc kubenswrapper[4803]: I0127 22:12:52.300219 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6cb97b886d-8vwwj"] Jan 27 22:12:52 crc kubenswrapper[4803]: I0127 22:12:52.300540 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6cb97b886d-8vwwj" podUID="6c26f757-3e53-46c6-be8c-4a052b5f86e2" containerName="heat-engine" containerID="cri-o://640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d" gracePeriod=60 Jan 27 22:12:52 crc kubenswrapper[4803]: I0127 22:12:52.308036 4803 scope.go:117] "RemoveContainer" containerID="9372383e0445b4dab23e8e94b962d9e05709307e6a343060f8c2a75e8d410065" Jan 27 22:12:52 crc kubenswrapper[4803]: I0127 22:12:52.371225 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:12:52 crc kubenswrapper[4803]: I0127 22:12:52.457172 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-bdfb8f445-vd7f5"] Jan 27 22:12:52 crc kubenswrapper[4803]: I0127 22:12:52.606457 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:12:52 crc kubenswrapper[4803]: I0127 22:12:52.693887 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-68bc78f5bb-r5jpw"] Jan 27 22:12:52 crc kubenswrapper[4803]: E0127 22:12:52.900945 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:12:52 crc kubenswrapper[4803]: E0127 22:12:52.913968 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:12:52 crc kubenswrapper[4803]: E0127 22:12:52.918173 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:12:52 crc kubenswrapper[4803]: E0127 22:12:52.918282 4803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6cb97b886d-8vwwj" podUID="6c26f757-3e53-46c6-be8c-4a052b5f86e2" containerName="heat-engine" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.022662 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.119415 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrp5f\" (UniqueName: \"kubernetes.io/projected/fa8732df-6c17-4d1f-9962-7c54b4809cb5-kube-api-access-qrp5f\") pod \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.119576 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data\") pod \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.119768 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-combined-ca-bundle\") pod \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.119786 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data-custom\") pod \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\" (UID: \"fa8732df-6c17-4d1f-9962-7c54b4809cb5\") " Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.138014 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fa8732df-6c17-4d1f-9962-7c54b4809cb5" (UID: "fa8732df-6c17-4d1f-9962-7c54b4809cb5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.148057 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8732df-6c17-4d1f-9962-7c54b4809cb5-kube-api-access-qrp5f" (OuterVolumeSpecName: "kube-api-access-qrp5f") pod "fa8732df-6c17-4d1f-9962-7c54b4809cb5" (UID: "fa8732df-6c17-4d1f-9962-7c54b4809cb5"). InnerVolumeSpecName "kube-api-access-qrp5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.170026 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa8732df-6c17-4d1f-9962-7c54b4809cb5" (UID: "fa8732df-6c17-4d1f-9962-7c54b4809cb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.217610 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data" (OuterVolumeSpecName: "config-data") pod "fa8732df-6c17-4d1f-9962-7c54b4809cb5" (UID: "fa8732df-6c17-4d1f-9962-7c54b4809cb5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.222239 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.222281 4803 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.222293 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrp5f\" (UniqueName: \"kubernetes.io/projected/fa8732df-6c17-4d1f-9962-7c54b4809cb5-kube-api-access-qrp5f\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.222308 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8732df-6c17-4d1f-9962-7c54b4809cb5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.485707 4803 generic.go:334] "Generic (PLEG): container finished" podID="a00ff690-b44a-4a6e-9bf3-560344feda39" containerID="dce989b27f022c765459e624f6cc7762dc4bfc64a9afc7c86bbcf98625aae767" exitCode=0 Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.485769 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-klms9" event={"ID":"a00ff690-b44a-4a6e-9bf3-560344feda39","Type":"ContainerDied","Data":"dce989b27f022c765459e624f6cc7762dc4bfc64a9afc7c86bbcf98625aae767"} Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.487925 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-bdfb8f445-vd7f5" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.488012 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bdfb8f445-vd7f5" event={"ID":"fa8732df-6c17-4d1f-9962-7c54b4809cb5","Type":"ContainerDied","Data":"d5bdedb3eecdd0fe561f8372100a0d0132064863adde617b80b15c2f9135e80a"} Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.488067 4803 scope.go:117] "RemoveContainer" containerID="2d08be016329c21848ed0bc07936d9457ad377d608723a2e9a3dc95de34c840f" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.491186 4803 generic.go:334] "Generic (PLEG): container finished" podID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerID="0b20f9a347459a9567c9be0d1a72df889a2ce8ff2c6d9a9063d0660c03fb94a6" exitCode=1 Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.491235 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" event={"ID":"dd41e8ae-8eec-474a-8036-6bb7372dbd80","Type":"ContainerDied","Data":"0b20f9a347459a9567c9be0d1a72df889a2ce8ff2c6d9a9063d0660c03fb94a6"} Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.524295 4803 scope.go:117] "RemoveContainer" containerID="9372383e0445b4dab23e8e94b962d9e05709307e6a343060f8c2a75e8d410065" Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.645988 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-bdfb8f445-vd7f5"] Jan 27 22:12:53 crc kubenswrapper[4803]: I0127 22:12:53.656022 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-bdfb8f445-vd7f5"] Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.114779 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.264457 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data-custom\") pod \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.264659 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9knv\" (UniqueName: \"kubernetes.io/projected/dd41e8ae-8eec-474a-8036-6bb7372dbd80-kube-api-access-v9knv\") pod \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.264831 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-combined-ca-bundle\") pod \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.264864 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data\") pod \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\" (UID: \"dd41e8ae-8eec-474a-8036-6bb7372dbd80\") " Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.272563 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd41e8ae-8eec-474a-8036-6bb7372dbd80-kube-api-access-v9knv" (OuterVolumeSpecName: "kube-api-access-v9knv") pod 
"dd41e8ae-8eec-474a-8036-6bb7372dbd80" (UID: "dd41e8ae-8eec-474a-8036-6bb7372dbd80"). InnerVolumeSpecName "kube-api-access-v9knv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.272646 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dd41e8ae-8eec-474a-8036-6bb7372dbd80" (UID: "dd41e8ae-8eec-474a-8036-6bb7372dbd80"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.307225 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd41e8ae-8eec-474a-8036-6bb7372dbd80" (UID: "dd41e8ae-8eec-474a-8036-6bb7372dbd80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.326989 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" path="/var/lib/kubelet/pods/fa8732df-6c17-4d1f-9962-7c54b4809cb5/volumes" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.350614 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data" (OuterVolumeSpecName: "config-data") pod "dd41e8ae-8eec-474a-8036-6bb7372dbd80" (UID: "dd41e8ae-8eec-474a-8036-6bb7372dbd80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.369835 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9knv\" (UniqueName: \"kubernetes.io/projected/dd41e8ae-8eec-474a-8036-6bb7372dbd80-kube-api-access-v9knv\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.369875 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.369885 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.369894 4803 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd41e8ae-8eec-474a-8036-6bb7372dbd80-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.503777 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" event={"ID":"dd41e8ae-8eec-474a-8036-6bb7372dbd80","Type":"ContainerDied","Data":"3f2c5fd3bfd6d060e4e166a5c21bba28299e2d34b36725718bf627519055f04a"} Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.503831 4803 scope.go:117] "RemoveContainer" containerID="0b20f9a347459a9567c9be0d1a72df889a2ce8ff2c6d9a9063d0660c03fb94a6" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.503909 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-68bc78f5bb-r5jpw" Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.537991 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-68bc78f5bb-r5jpw"] Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.547655 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-68bc78f5bb-r5jpw"] Jan 27 22:12:54 crc kubenswrapper[4803]: I0127 22:12:54.941027 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.087812 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wnlq\" (UniqueName: \"kubernetes.io/projected/a00ff690-b44a-4a6e-9bf3-560344feda39-kube-api-access-8wnlq\") pod \"a00ff690-b44a-4a6e-9bf3-560344feda39\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.088286 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-combined-ca-bundle\") pod \"a00ff690-b44a-4a6e-9bf3-560344feda39\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.088407 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-config-data\") pod \"a00ff690-b44a-4a6e-9bf3-560344feda39\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.088462 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-scripts\") pod \"a00ff690-b44a-4a6e-9bf3-560344feda39\" (UID: \"a00ff690-b44a-4a6e-9bf3-560344feda39\") " Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.095883 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00ff690-b44a-4a6e-9bf3-560344feda39-kube-api-access-8wnlq" (OuterVolumeSpecName: "kube-api-access-8wnlq") pod "a00ff690-b44a-4a6e-9bf3-560344feda39" (UID: "a00ff690-b44a-4a6e-9bf3-560344feda39"). InnerVolumeSpecName "kube-api-access-8wnlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.096813 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-scripts" (OuterVolumeSpecName: "scripts") pod "a00ff690-b44a-4a6e-9bf3-560344feda39" (UID: "a00ff690-b44a-4a6e-9bf3-560344feda39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.137027 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a00ff690-b44a-4a6e-9bf3-560344feda39" (UID: "a00ff690-b44a-4a6e-9bf3-560344feda39"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.141633 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-config-data" (OuterVolumeSpecName: "config-data") pod "a00ff690-b44a-4a6e-9bf3-560344feda39" (UID: "a00ff690-b44a-4a6e-9bf3-560344feda39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.191754 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wnlq\" (UniqueName: \"kubernetes.io/projected/a00ff690-b44a-4a6e-9bf3-560344feda39-kube-api-access-8wnlq\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.191800 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.191813 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.191824 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00ff690-b44a-4a6e-9bf3-560344feda39-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.518059 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-klms9" event={"ID":"a00ff690-b44a-4a6e-9bf3-560344feda39","Type":"ContainerDied","Data":"7b67a17a19dbaeb7634d2d13ececdb992ab4fa60fa495282f6a95cf9cb041c9a"} Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.518104 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b67a17a19dbaeb7634d2d13ececdb992ab4fa60fa495282f6a95cf9cb041c9a" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.518140 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-klms9" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.629182 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 22:12:55 crc kubenswrapper[4803]: E0127 22:12:55.629730 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerName="heat-cfnapi" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.629748 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerName="heat-cfnapi" Jan 27 22:12:55 crc kubenswrapper[4803]: E0127 22:12:55.629761 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" containerName="heat-api" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.629768 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" containerName="heat-api" Jan 27 22:12:55 crc kubenswrapper[4803]: E0127 22:12:55.629779 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerName="heat-cfnapi" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.629785 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerName="heat-cfnapi" Jan 27 22:12:55 crc kubenswrapper[4803]: E0127 22:12:55.629805 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" containerName="heat-api" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.629818 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" containerName="heat-api" Jan 27 22:12:55 crc kubenswrapper[4803]: E0127 22:12:55.629860 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00ff690-b44a-4a6e-9bf3-560344feda39" containerName="nova-cell0-conductor-db-sync" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.629866 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00ff690-b44a-4a6e-9bf3-560344feda39" containerName="nova-cell0-conductor-db-sync" Jan 27 22:12:55 crc kubenswrapper[4803]: E0127 22:12:55.629886 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerName="heat-cfnapi" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.629892 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerName="heat-cfnapi" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.630089 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" containerName="heat-api" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.630106 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerName="heat-cfnapi" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.630120 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerName="heat-cfnapi" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.630148 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00ff690-b44a-4a6e-9bf3-560344feda39" containerName="nova-cell0-conductor-db-sync"
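
Note: the RemoveStaleState sweep above is noisy by design: the same podUID/containerName pair recurs (heat-cfnapi three times and heat-api twice among the cpu_manager entries), which is consistent with the sweep re-running as each new pod is admitted rather than once per pod deletion. A small sketch that tallies the repeats from a captured log; the pattern mirrors the logged fields, and the rest is illustrative:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the cpu_manager lines above, e.g.
//   "RemoveStaleState: removing container" podUID="dd41e8ae-..." containerName="heat-cfnapi"
var stale = regexp.MustCompile(`RemoveStaleState: removing container" podUID="([^"]+)" containerName="([^"]+)"`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		// Captured lines can hold several records, so collect every match.
		for _, m := range stale.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[2]+" ("+m[1][:8]+")"]++ // containerName plus a UID prefix
		}
	}
	for k, n := range counts {
		fmt.Printf("%-40s swept %d time(s)\n", k, n)
	}
}

Repeated sweeps of an already-deleted assignment are harmless; the E-level severity here reflects stale state being found, not a failure.

Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.630974 4803 util.go:30] "No sandbox for pod can be found. 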
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.635462 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pmtb4" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.636051 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.647728 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.704458 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c23100-792f-4ce4-9c03-55ffb04e5538-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"73c23100-792f-4ce4-9c03-55ffb04e5538\") " pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.704503 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnkwd\" (UniqueName: \"kubernetes.io/projected/73c23100-792f-4ce4-9c03-55ffb04e5538-kube-api-access-cnkwd\") pod \"nova-cell0-conductor-0\" (UID: \"73c23100-792f-4ce4-9c03-55ffb04e5538\") " pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.704671 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c23100-792f-4ce4-9c03-55ffb04e5538-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"73c23100-792f-4ce4-9c03-55ffb04e5538\") " pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.806109 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c23100-792f-4ce4-9c03-55ffb04e5538-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"73c23100-792f-4ce4-9c03-55ffb04e5538\") " pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.806453 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c23100-792f-4ce4-9c03-55ffb04e5538-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"73c23100-792f-4ce4-9c03-55ffb04e5538\") " pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.806541 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnkwd\" (UniqueName: \"kubernetes.io/projected/73c23100-792f-4ce4-9c03-55ffb04e5538-kube-api-access-cnkwd\") pod \"nova-cell0-conductor-0\" (UID: \"73c23100-792f-4ce4-9c03-55ffb04e5538\") " pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.810286 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c23100-792f-4ce4-9c03-55ffb04e5538-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"73c23100-792f-4ce4-9c03-55ffb04e5538\") " pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.825226 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c23100-792f-4ce4-9c03-55ffb04e5538-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"73c23100-792f-4ce4-9c03-55ffb04e5538\") " pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.825311 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnkwd\" (UniqueName: \"kubernetes.io/projected/73c23100-792f-4ce4-9c03-55ffb04e5538-kube-api-access-cnkwd\") pod \"nova-cell0-conductor-0\" (UID: \"73c23100-792f-4ce4-9c03-55ffb04e5538\") " pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:55 crc kubenswrapper[4803]: I0127 22:12:55.955646 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:56 crc kubenswrapper[4803]: I0127 22:12:56.324085 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" path="/var/lib/kubelet/pods/dd41e8ae-8eec-474a-8036-6bb7372dbd80/volumes" Jan 27 22:12:56 crc kubenswrapper[4803]: I0127 22:12:56.471419 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 22:12:56 crc kubenswrapper[4803]: I0127 22:12:56.566775 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"73c23100-792f-4ce4-9c03-55ffb04e5538","Type":"ContainerStarted","Data":"1ac1023fcc5f281aa7489eb6970150bee8796b378313176be4d375e5ae53e6cb"} Jan 27 22:12:57 crc kubenswrapper[4803]: I0127 22:12:57.588926 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"73c23100-792f-4ce4-9c03-55ffb04e5538","Type":"ContainerStarted","Data":"cf891c487fdf8e0f1dc47fdddf337dcf2c13738cc00baf241e0887aab37b9fa8"} Jan 27 22:12:57 crc kubenswrapper[4803]: I0127 22:12:57.589525 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 22:12:57 crc kubenswrapper[4803]: I0127 22:12:57.629729 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.629710716 podStartE2EDuration="2.629710716s" podCreationTimestamp="2026-01-27 22:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:12:57.621840765 +0000 UTC m=+1530.037862464" watchObservedRunningTime="2026-01-27 22:12:57.629710716 +0000 UTC m=+1530.045732415"
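
Note: the "Observed pod startup duration" record above is easy to check by hand: klog renders these timestamps in Go's default time.Time format, and with both pulling timestamps at the zero time (no image pull) the SLO duration is simply watchObservedRunningTime minus podCreationTimestamp. A sketch reproducing the 2.629710716s figure, with the m=+... monotonic suffixes dropped and assuming that is how the tracker derives it:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time string format, which is what the log prints.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-27 22:12:55 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-27 22:12:57.629710716 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 2.629710716s, matching podStartE2EDuration
}

Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.521837 4803 util.go:48] "No ready sandbox for pod can be found. 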
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.576595 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmsdf\" (UniqueName: \"kubernetes.io/projected/de8b9385-8326-4cf2-ab68-90ce9a2d9608-kube-api-access-lmsdf\") pod \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.576793 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-run-httpd\") pod \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.576868 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-combined-ca-bundle\") pod \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.576906 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-sg-core-conf-yaml\") pod \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.576944 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-scripts\") pod \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.577055 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-config-data\") pod \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.577117 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-log-httpd\") pod \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\" (UID: \"de8b9385-8326-4cf2-ab68-90ce9a2d9608\") " Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.578212 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "de8b9385-8326-4cf2-ab68-90ce9a2d9608" (UID: "de8b9385-8326-4cf2-ab68-90ce9a2d9608"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.578288 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "de8b9385-8326-4cf2-ab68-90ce9a2d9608" (UID: "de8b9385-8326-4cf2-ab68-90ce9a2d9608"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.595197 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-scripts" (OuterVolumeSpecName: "scripts") pod "de8b9385-8326-4cf2-ab68-90ce9a2d9608" (UID: "de8b9385-8326-4cf2-ab68-90ce9a2d9608"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.609279 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de8b9385-8326-4cf2-ab68-90ce9a2d9608-kube-api-access-lmsdf" (OuterVolumeSpecName: "kube-api-access-lmsdf") pod "de8b9385-8326-4cf2-ab68-90ce9a2d9608" (UID: "de8b9385-8326-4cf2-ab68-90ce9a2d9608"). InnerVolumeSpecName "kube-api-access-lmsdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.695384 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "de8b9385-8326-4cf2-ab68-90ce9a2d9608" (UID: "de8b9385-8326-4cf2-ab68-90ce9a2d9608"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.696496 4803 generic.go:334] "Generic (PLEG): container finished" podID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerID="69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4" exitCode=0 Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.697004 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.698024 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de8b9385-8326-4cf2-ab68-90ce9a2d9608","Type":"ContainerDied","Data":"69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4"} Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.698091 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de8b9385-8326-4cf2-ab68-90ce9a2d9608","Type":"ContainerDied","Data":"d1ec6b90f114ced48798dc3d125fad96e752cb1439f76e51bf259971e07355f3"} Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.698109 4803 scope.go:117] "RemoveContainer" containerID="e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.715397 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmsdf\" (UniqueName: \"kubernetes.io/projected/de8b9385-8326-4cf2-ab68-90ce9a2d9608-kube-api-access-lmsdf\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.715431 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.715450 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.715460 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.715469 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de8b9385-8326-4cf2-ab68-90ce9a2d9608-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.808631 4803 scope.go:117] "RemoveContainer" containerID="233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.878292 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de8b9385-8326-4cf2-ab68-90ce9a2d9608" (UID: "de8b9385-8326-4cf2-ab68-90ce9a2d9608"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.895333 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-config-data" (OuterVolumeSpecName: "config-data") pod "de8b9385-8326-4cf2-ab68-90ce9a2d9608" (UID: "de8b9385-8326-4cf2-ab68-90ce9a2d9608"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.900403 4803 scope.go:117] "RemoveContainer" containerID="6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.920952 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.920976 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de8b9385-8326-4cf2-ab68-90ce9a2d9608-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.939611 4803 scope.go:117] "RemoveContainer" containerID="69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.974699 4803 scope.go:117] "RemoveContainer" containerID="e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253" Jan 27 22:12:58 crc kubenswrapper[4803]: E0127 22:12:58.975231 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253\": container with ID starting with e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253 not found: ID does not exist" containerID="e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.975265 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253"} err="failed to get container status \"e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253\": rpc error: code = NotFound desc = could not find container \"e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253\": container with ID starting with e83dfa637a2bd72ca82c7b91e6af1a4efb9b0f9007197976757d5e40f4dc4253 not found: ID does not exist"
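
Note: the NotFound pair above (ContainerStatus failing, then "DeleteContainer returned error"), repeated below for the other ceilometer containers, is the benign face of idempotent cleanup: the container was already gone by the time the kubelet re-checked it, so the miss is logged and the sync loop moves on. The usual Go pattern for treating a gRPC NotFound as success looks roughly like the sketch below; remover and removeIdempotent are illustrative stand-ins, not the real CRI client surface:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// remover is a stand-in for whatever client wraps the runtime's remove-container RPC.
type remover interface {
	RemoveContainer(ctx context.Context, id string) error
}

// removeIdempotent treats "already deleted" the same as "deleted now",
// mirroring how the kubelet logs the NotFound above and keeps going.
func removeIdempotent(ctx context.Context, r remover, id string) error {
	err := r.RemoveContainer(ctx, id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already gone, nothing to do\n", id)
		return nil
	}
	return err // nil on success, or a real failure worth surfacing
}

// gone simulates a runtime whose container has already been removed.
type gone struct{}

func (gone) RemoveContainer(ctx context.Context, id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	fmt.Println(removeIdempotent(context.Background(), gone{}, "e83dfa637a2b"))
}

Jan 27 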
22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.975285 4803 scope.go:117] "RemoveContainer" containerID="233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b" Jan 27 22:12:58 crc kubenswrapper[4803]: E0127 22:12:58.978683 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b\": container with ID starting with 233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b not found: ID does not exist" containerID="233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.978710 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b"} err="failed to get container status \"233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b\": rpc error: code = NotFound desc = could not find container \"233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b\": container with ID starting with 233432ca97298631c297d16e653490015c85686c53f111d1439926392b37fb8b not found: ID does not exist" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.978723 4803 scope.go:117] "RemoveContainer" containerID="6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5" Jan 27 22:12:58 crc kubenswrapper[4803]: E0127 22:12:58.979076 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5\": container with ID starting with 6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5 not found: ID does not exist" containerID="6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.979098 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5"} err="failed to get container status \"6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5\": rpc error: code = NotFound desc = could not find container \"6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5\": container with ID starting with 6575051532de0567ebdae38798785efc43d22349ab321b654f3a6815004254f5 not found: ID does not exist" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.979110 4803 scope.go:117] "RemoveContainer" containerID="69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4" Jan 27 22:12:58 crc kubenswrapper[4803]: E0127 22:12:58.979331 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4\": container with ID starting with 69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4 not found: ID does not exist" containerID="69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4" Jan 27 22:12:58 crc kubenswrapper[4803]: I0127 22:12:58.979355 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4"} err="failed to get container status \"69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4\": rpc error: code = NotFound desc = could not find container 
\"69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4\": container with ID starting with 69f70ca8163d3b00ebe80b92efcd2d375cbd45c2f901c696c6186f65d3d5b0f4 not found: ID does not exist" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.045610 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.063863 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.074698 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:59 crc kubenswrapper[4803]: E0127 22:12:59.075175 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="proxy-httpd" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.075209 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="proxy-httpd" Jan 27 22:12:59 crc kubenswrapper[4803]: E0127 22:12:59.075233 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="sg-core" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.075239 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="sg-core" Jan 27 22:12:59 crc kubenswrapper[4803]: E0127 22:12:59.075266 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="ceilometer-notification-agent" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.075274 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="ceilometer-notification-agent" Jan 27 22:12:59 crc kubenswrapper[4803]: E0127 22:12:59.075288 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="ceilometer-central-agent" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.075294 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="ceilometer-central-agent" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.075549 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8732df-6c17-4d1f-9962-7c54b4809cb5" containerName="heat-api" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.075584 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd41e8ae-8eec-474a-8036-6bb7372dbd80" containerName="heat-cfnapi" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.075599 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="sg-core" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.075607 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="proxy-httpd" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.075619 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="ceilometer-central-agent" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.075629 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" containerName="ceilometer-notification-agent" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.084903 4803 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.087973 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.089673 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.092623 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.226578 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.226711 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-run-httpd\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.226762 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-scripts\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.226789 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-config-data\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.226905 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-log-httpd\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.226945 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r72j\" (UniqueName: \"kubernetes.io/projected/82091755-eb7c-4c14-b262-0d7102b6799c-kube-api-access-9r72j\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.227043 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.329621 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-log-httpd\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc 
kubenswrapper[4803]: I0127 22:12:59.329706 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r72j\" (UniqueName: \"kubernetes.io/projected/82091755-eb7c-4c14-b262-0d7102b6799c-kube-api-access-9r72j\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.329775 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.329814 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.329924 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-run-httpd\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.330008 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-scripts\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.330059 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-config-data\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.330982 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-log-httpd\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.331934 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-run-httpd\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.334394 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.334629 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-scripts\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.334821 4803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.336836 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-config-data\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.354317 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r72j\" (UniqueName: \"kubernetes.io/projected/82091755-eb7c-4c14-b262-0d7102b6799c-kube-api-access-9r72j\") pod \"ceilometer-0\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.415159 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:12:59 crc kubenswrapper[4803]: I0127 22:12:59.952650 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:12:59 crc kubenswrapper[4803]: W0127 22:12:59.975250 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82091755_eb7c_4c14_b262_0d7102b6799c.slice/crio-e2ccb9066c5732f2218cecf26f3c16b172adafdc0bb273cf9e1c569db56fb7f5 WatchSource:0}: Error finding container e2ccb9066c5732f2218cecf26f3c16b172adafdc0bb273cf9e1c569db56fb7f5: Status 404 returned error can't find the container with id e2ccb9066c5732f2218cecf26f3c16b172adafdc0bb273cf9e1c569db56fb7f5 Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.322584 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de8b9385-8326-4cf2-ab68-90ce9a2d9608" path="/var/lib/kubelet/pods/de8b9385-8326-4cf2-ab68-90ce9a2d9608/volumes" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.369943 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.455289 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data-custom\") pod \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.455791 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28qn7\" (UniqueName: \"kubernetes.io/projected/6c26f757-3e53-46c6-be8c-4a052b5f86e2-kube-api-access-28qn7\") pod \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.456119 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data\") pod \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.456191 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-combined-ca-bundle\") pod \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\" (UID: \"6c26f757-3e53-46c6-be8c-4a052b5f86e2\") " Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.463530 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c26f757-3e53-46c6-be8c-4a052b5f86e2" (UID: "6c26f757-3e53-46c6-be8c-4a052b5f86e2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.464732 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c26f757-3e53-46c6-be8c-4a052b5f86e2-kube-api-access-28qn7" (OuterVolumeSpecName: "kube-api-access-28qn7") pod "6c26f757-3e53-46c6-be8c-4a052b5f86e2" (UID: "6c26f757-3e53-46c6-be8c-4a052b5f86e2"). InnerVolumeSpecName "kube-api-access-28qn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.490585 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c26f757-3e53-46c6-be8c-4a052b5f86e2" (UID: "6c26f757-3e53-46c6-be8c-4a052b5f86e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.532403 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data" (OuterVolumeSpecName: "config-data") pod "6c26f757-3e53-46c6-be8c-4a052b5f86e2" (UID: "6c26f757-3e53-46c6-be8c-4a052b5f86e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.559577 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.559611 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.559625 4803 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c26f757-3e53-46c6-be8c-4a052b5f86e2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.559635 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28qn7\" (UniqueName: \"kubernetes.io/projected/6c26f757-3e53-46c6-be8c-4a052b5f86e2-kube-api-access-28qn7\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.730253 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82091755-eb7c-4c14-b262-0d7102b6799c","Type":"ContainerStarted","Data":"e2ccb9066c5732f2218cecf26f3c16b172adafdc0bb273cf9e1c569db56fb7f5"} Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.732300 4803 generic.go:334] "Generic (PLEG): container finished" podID="6c26f757-3e53-46c6-be8c-4a052b5f86e2" containerID="640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d" exitCode=0 Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.732328 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cb97b886d-8vwwj" event={"ID":"6c26f757-3e53-46c6-be8c-4a052b5f86e2","Type":"ContainerDied","Data":"640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d"} Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.732345 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cb97b886d-8vwwj" event={"ID":"6c26f757-3e53-46c6-be8c-4a052b5f86e2","Type":"ContainerDied","Data":"94e3cd78689e6653b326d16589b5bb552d16215707a57b71a10bba8af0e1f7d8"} Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.732361 4803 scope.go:117] "RemoveContainer" containerID="640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.732459 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6cb97b886d-8vwwj" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.758536 4803 scope.go:117] "RemoveContainer" containerID="640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d" Jan 27 22:13:00 crc kubenswrapper[4803]: E0127 22:13:00.758996 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d\": container with ID starting with 640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d not found: ID does not exist" containerID="640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.759052 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d"} err="failed to get container status \"640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d\": rpc error: code = NotFound desc = could not find container \"640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d\": container with ID starting with 640af11c43630eb0ce6f691ae4aa8f9e90f50088543b31a9ae0ba5bd0e63818d not found: ID does not exist" Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.788892 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6cb97b886d-8vwwj"] Jan 27 22:13:00 crc kubenswrapper[4803]: I0127 22:13:00.800101 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6cb97b886d-8vwwj"] Jan 27 22:13:01 crc kubenswrapper[4803]: I0127 22:13:01.307279 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:13:01 crc kubenswrapper[4803]: E0127 22:13:01.307712 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:13:01 crc kubenswrapper[4803]: I0127 22:13:01.742639 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82091755-eb7c-4c14-b262-0d7102b6799c","Type":"ContainerStarted","Data":"2e17679aa3d522e8046004ef8d280c9c6e69a157ed11ecd2f2a91067d2e10474"} Jan 27 22:13:01 crc kubenswrapper[4803]: I0127 22:13:01.743004 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82091755-eb7c-4c14-b262-0d7102b6799c","Type":"ContainerStarted","Data":"63b93a8afc33bc03a28f8c9e1ce3eb4360be0fb607d767221b60d43a71dfaa82"} Jan 27 22:13:02 crc kubenswrapper[4803]: I0127 22:13:02.319759 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c26f757-3e53-46c6-be8c-4a052b5f86e2" path="/var/lib/kubelet/pods/6c26f757-3e53-46c6-be8c-4a052b5f86e2/volumes" Jan 27 22:13:02 crc kubenswrapper[4803]: I0127 22:13:02.764383 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82091755-eb7c-4c14-b262-0d7102b6799c","Type":"ContainerStarted","Data":"db8313d43f328cb817e7425017d6c00b31c220877214596c865a3c1d765df5cf"} Jan 27 22:13:03 crc kubenswrapper[4803]: I0127 22:13:03.776082 4803 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82091755-eb7c-4c14-b262-0d7102b6799c","Type":"ContainerStarted","Data":"a37d41e2e0ae47438ff83d2e6eee0976e50c882c69109fa80e66eaacfc3b80ba"} Jan 27 22:13:03 crc kubenswrapper[4803]: I0127 22:13:03.777796 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 22:13:03 crc kubenswrapper[4803]: I0127 22:13:03.803247 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.482511267 podStartE2EDuration="4.803227294s" podCreationTimestamp="2026-01-27 22:12:59 +0000 UTC" firstStartedPulling="2026-01-27 22:12:59.977516043 +0000 UTC m=+1532.393537742" lastFinishedPulling="2026-01-27 22:13:03.29823207 +0000 UTC m=+1535.714253769" observedRunningTime="2026-01-27 22:13:03.796207206 +0000 UTC m=+1536.212228905" watchObservedRunningTime="2026-01-27 22:13:03.803227294 +0000 UTC m=+1536.219248993" Jan 27 22:13:05 crc kubenswrapper[4803]: I0127 22:13:05.993647 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.538313 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-w8rk6"] Jan 27 22:13:06 crc kubenswrapper[4803]: E0127 22:13:06.538794 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c26f757-3e53-46c6-be8c-4a052b5f86e2" containerName="heat-engine" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.538809 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c26f757-3e53-46c6-be8c-4a052b5f86e2" containerName="heat-engine" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.539024 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c26f757-3e53-46c6-be8c-4a052b5f86e2" containerName="heat-engine" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.539768 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.542255 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.542346 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.581889 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-w8rk6"] Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.628327 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-scripts\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.628694 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-config-data\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.628796 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkh9k\" (UniqueName: \"kubernetes.io/projected/45a4597f-3096-45fc-9383-7f891d163110-kube-api-access-wkh9k\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.628828 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.690563 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.692768 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.702676 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.710924 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.731059 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-scripts\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.731138 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-config-data\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.731234 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkh9k\" (UniqueName: \"kubernetes.io/projected/45a4597f-3096-45fc-9383-7f891d163110-kube-api-access-wkh9k\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.731263 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.742731 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-scripts\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.751650 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-config-data\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.758949 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.794239 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkh9k\" (UniqueName: \"kubernetes.io/projected/45a4597f-3096-45fc-9383-7f891d163110-kube-api-access-wkh9k\") pod \"nova-cell0-cell-mapping-w8rk6\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.835222 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v6tn\" (UniqueName: \"kubernetes.io/projected/0d6ab0f0-3da9-4249-8225-3da652f4af33-kube-api-access-6v6tn\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.835308 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.835391 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-config-data\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.835464 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d6ab0f0-3da9-4249-8225-3da652f4af33-logs\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.873613 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.894915 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.902254 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.909540 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.938970 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d6ab0f0-3da9-4249-8225-3da652f4af33-logs\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.939086 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v6tn\" (UniqueName: \"kubernetes.io/projected/0d6ab0f0-3da9-4249-8225-3da652f4af33-kube-api-access-6v6tn\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.939159 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.939220 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-config-data\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.941512 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d6ab0f0-3da9-4249-8225-3da652f4af33-logs\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.941553 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.955047 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-config-data\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.979398 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.983356 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 22:13:06 crc kubenswrapper[4803]: I0127 22:13:06.997840 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.005308 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.030860 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.040378 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v6tn\" (UniqueName: \"kubernetes.io/projected/0d6ab0f0-3da9-4249-8225-3da652f4af33-kube-api-access-6v6tn\") pod \"nova-api-0\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " pod="openstack/nova-api-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.041785 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp99n\" (UniqueName: \"kubernetes.io/projected/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-kube-api-access-kp99n\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.041907 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-config-data\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.041935 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-logs\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.042025 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.112719 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.114228 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.119373 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.148943 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp99n\" (UniqueName: \"kubernetes.io/projected/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-kube-api-access-kp99n\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.149092 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-config-data\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.149124 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-logs\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.149164 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.149290 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.149893 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.149952 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrrc\" (UniqueName: \"kubernetes.io/projected/42e5e272-92f3-43a7-8084-c7f1e697b9f3-kube-api-access-nkrrc\") pod \"nova-scheduler-0\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.151231 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-logs\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.153837 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 
22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.159716 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-config-data\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.169007 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.195601 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp99n\" (UniqueName: \"kubernetes.io/projected/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-kube-api-access-kp99n\") pod \"nova-metadata-0\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.213997 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pf55t"] Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.216258 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.216299 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.237106 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pf55t"] Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.252026 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.252138 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.252166 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.252220 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrrc\" (UniqueName: \"kubernetes.io/projected/42e5e272-92f3-43a7-8084-c7f1e697b9f3-kube-api-access-nkrrc\") pod \"nova-scheduler-0\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.252418 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.252446 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dnt\" (UniqueName: \"kubernetes.io/projected/3e35b785-4c7d-4677-bd3c-8642931036c0-kube-api-access-w9dnt\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.263587 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.264092 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.272002 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrrc\" (UniqueName: \"kubernetes.io/projected/42e5e272-92f3-43a7-8084-c7f1e697b9f3-kube-api-access-nkrrc\") pod \"nova-scheduler-0\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.355117 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dnt\" (UniqueName: \"kubernetes.io/projected/3e35b785-4c7d-4677-bd3c-8642931036c0-kube-api-access-w9dnt\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.355206 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.355228 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.355271 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.355310 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.355336 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.355450 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-config\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.355482 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwl2\" (UniqueName: \"kubernetes.io/projected/70bbf6db-858e-41ec-a079-876f60dc0501-kube-api-access-2hwl2\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.355517 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.370864 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.371346 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.374268 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dnt\" (UniqueName: \"kubernetes.io/projected/3e35b785-4c7d-4677-bd3c-8642931036c0-kube-api-access-w9dnt\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.442036 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.457521 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.457618 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.457793 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-config\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.457865 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hwl2\" (UniqueName: \"kubernetes.io/projected/70bbf6db-858e-41ec-a079-876f60dc0501-kube-api-access-2hwl2\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.457905 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.458035 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.459930 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.459947 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.460988 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: 
I0127 22:13:07.461488 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-config\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.462395 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.484448 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.486761 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hwl2\" (UniqueName: \"kubernetes.io/projected/70bbf6db-858e-41ec-a079-876f60dc0501-kube-api-access-2hwl2\") pod \"dnsmasq-dns-568d7fd7cf-pf55t\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.573171 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.581571 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.678225 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-w8rk6"] Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.853376 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:13:07 crc kubenswrapper[4803]: I0127 22:13:07.928024 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w8rk6" event={"ID":"45a4597f-3096-45fc-9383-7f891d163110","Type":"ContainerStarted","Data":"83417b4463eb5535aa02c8199f774a14a5f9e2e195e84188d5820bb15094c515"} Jan 27 22:13:08 crc kubenswrapper[4803]: I0127 22:13:08.471313 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:13:08 crc kubenswrapper[4803]: I0127 22:13:08.924348 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 22:13:08 crc kubenswrapper[4803]: I0127 22:13:08.938069 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:13:08 crc kubenswrapper[4803]: W0127 22:13:08.951823 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e35b785_4c7d_4677_bd3c_8642931036c0.slice/crio-a40d2c9e7d89a55280e1287e16d7f29553f0fff170660d9e638cd3cde2b8bc56 WatchSource:0}: Error finding container a40d2c9e7d89a55280e1287e16d7f29553f0fff170660d9e638cd3cde2b8bc56: Status 404 returned error can't find the container with id a40d2c9e7d89a55280e1287e16d7f29553f0fff170660d9e638cd3cde2b8bc56 Jan 27 22:13:08 crc kubenswrapper[4803]: I0127 22:13:08.974331 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w8rk6" 
event={"ID":"45a4597f-3096-45fc-9383-7f891d163110","Type":"ContainerStarted","Data":"45f7e908d8f9f431a81ef47da5b52c27f94ec80deac8382e7a40d9754b781494"} Jan 27 22:13:08 crc kubenswrapper[4803]: I0127 22:13:08.976209 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc","Type":"ContainerStarted","Data":"255c392a8cc2639bc55ff09de69c714845d01f473a3f5720d1411ea89738a019"} Jan 27 22:13:08 crc kubenswrapper[4803]: I0127 22:13:08.976975 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d6ab0f0-3da9-4249-8225-3da652f4af33","Type":"ContainerStarted","Data":"e9cef74a80d5ee19b4bc3151ec72274cd3bcccb01b79af1f814ea69db93bb3a7"} Jan 27 22:13:08 crc kubenswrapper[4803]: W0127 22:13:08.988035 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70bbf6db_858e_41ec_a079_876f60dc0501.slice/crio-041f6e9d1021ffdcc5f60b30ab67569f55a68526fee231f2fb5f6670921f629a WatchSource:0}: Error finding container 041f6e9d1021ffdcc5f60b30ab67569f55a68526fee231f2fb5f6670921f629a: Status 404 returned error can't find the container with id 041f6e9d1021ffdcc5f60b30ab67569f55a68526fee231f2fb5f6670921f629a Jan 27 22:13:08 crc kubenswrapper[4803]: I0127 22:13:08.990625 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pf55t"] Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.045892 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-w8rk6" podStartSLOduration=3.045870071 podStartE2EDuration="3.045870071s" podCreationTimestamp="2026-01-27 22:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:13:09.00420324 +0000 UTC m=+1541.420224939" watchObservedRunningTime="2026-01-27 22:13:09.045870071 +0000 UTC m=+1541.461891770" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.152022 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zgfkn"] Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.154080 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.156523 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.160833 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.179138 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zgfkn"] Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.336997 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-scripts\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.337160 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdksf\" (UniqueName: \"kubernetes.io/projected/50cb0429-fb71-444b-8fcd-d78847af272a-kube-api-access-rdksf\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.337354 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.337419 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-config-data\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.439708 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-scripts\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.439881 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdksf\" (UniqueName: \"kubernetes.io/projected/50cb0429-fb71-444b-8fcd-d78847af272a-kube-api-access-rdksf\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.440087 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.440147 4803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-config-data\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.447997 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-scripts\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.448596 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.454818 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-config-data\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.466582 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdksf\" (UniqueName: \"kubernetes.io/projected/50cb0429-fb71-444b-8fcd-d78847af272a-kube-api-access-rdksf\") pod \"nova-cell1-conductor-db-sync-zgfkn\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:09 crc kubenswrapper[4803]: I0127 22:13:09.506464 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:10 crc kubenswrapper[4803]: I0127 22:13:10.026571 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42e5e272-92f3-43a7-8084-c7f1e697b9f3","Type":"ContainerStarted","Data":"6a004aa1005e2eccde99ff81b2332dd39fc565e5b266fe386f1239341e511792"} Jan 27 22:13:10 crc kubenswrapper[4803]: I0127 22:13:10.041561 4803 generic.go:334] "Generic (PLEG): container finished" podID="70bbf6db-858e-41ec-a079-876f60dc0501" containerID="ba5ad5a4e78c61b237990870658214e5e203e25e481ba94f6a9f97210ebc082e" exitCode=0 Jan 27 22:13:10 crc kubenswrapper[4803]: I0127 22:13:10.041637 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" event={"ID":"70bbf6db-858e-41ec-a079-876f60dc0501","Type":"ContainerDied","Data":"ba5ad5a4e78c61b237990870658214e5e203e25e481ba94f6a9f97210ebc082e"} Jan 27 22:13:10 crc kubenswrapper[4803]: I0127 22:13:10.041665 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" event={"ID":"70bbf6db-858e-41ec-a079-876f60dc0501","Type":"ContainerStarted","Data":"041f6e9d1021ffdcc5f60b30ab67569f55a68526fee231f2fb5f6670921f629a"} Jan 27 22:13:10 crc kubenswrapper[4803]: I0127 22:13:10.052907 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e35b785-4c7d-4677-bd3c-8642931036c0","Type":"ContainerStarted","Data":"a40d2c9e7d89a55280e1287e16d7f29553f0fff170660d9e638cd3cde2b8bc56"} Jan 27 22:13:10 crc kubenswrapper[4803]: I0127 22:13:10.123612 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zgfkn"] Jan 27 22:13:10 crc kubenswrapper[4803]: I0127 22:13:10.770385 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 22:13:10 crc kubenswrapper[4803]: I0127 22:13:10.790760 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:13:11 crc kubenswrapper[4803]: I0127 22:13:11.092218 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zgfkn" event={"ID":"50cb0429-fb71-444b-8fcd-d78847af272a","Type":"ContainerStarted","Data":"0e7439b3f9441dab751d360345a4e43a712ecdc1feabaff926e545f40c5b1203"} Jan 27 22:13:11 crc kubenswrapper[4803]: I0127 22:13:11.092260 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zgfkn" event={"ID":"50cb0429-fb71-444b-8fcd-d78847af272a","Type":"ContainerStarted","Data":"cfbc177025eaf2f1cbcd81ed07437e817712360cfd75f2273047a7abd9aa5d96"} Jan 27 22:13:11 crc kubenswrapper[4803]: I0127 22:13:11.111578 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" event={"ID":"70bbf6db-858e-41ec-a079-876f60dc0501","Type":"ContainerStarted","Data":"7065527c8645fa3b090595903cdfd6183b57f6c8b5eaea4686b06100af778f9a"} Jan 27 22:13:11 crc kubenswrapper[4803]: I0127 22:13:11.112816 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:11 crc kubenswrapper[4803]: I0127 22:13:11.136392 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zgfkn" podStartSLOduration=2.136373285 podStartE2EDuration="2.136373285s" podCreationTimestamp="2026-01-27 22:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:13:11.111240739 +0000 UTC m=+1543.527262438" watchObservedRunningTime="2026-01-27 22:13:11.136373285 +0000 UTC m=+1543.552394974" Jan 27 22:13:11 crc kubenswrapper[4803]: I0127 22:13:11.148438 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" podStartSLOduration=5.148416869 podStartE2EDuration="5.148416869s" podCreationTimestamp="2026-01-27 22:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:13:11.140124916 +0000 UTC m=+1543.556146615" watchObservedRunningTime="2026-01-27 22:13:11.148416869 +0000 UTC m=+1543.564438558" Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.190564 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e35b785-4c7d-4677-bd3c-8642931036c0","Type":"ContainerStarted","Data":"91e6f87023c54ff05031e27b2e720b8d5d7fbd7b9e15e7132d1c3c580fe5a30d"} Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.190715 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3e35b785-4c7d-4677-bd3c-8642931036c0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://91e6f87023c54ff05031e27b2e720b8d5d7fbd7b9e15e7132d1c3c580fe5a30d" gracePeriod=30 Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.195729 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc","Type":"ContainerStarted","Data":"d2137644e16498ed9498042acf081b4e24799d067312a5dae03f4d1d622921ad"} Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.195787 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc","Type":"ContainerStarted","Data":"cc56e4fd33dd400b1b4a1bcaa618404d311d59a5357dca918e8fafbb9500d3f8"} Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.195892 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" containerName="nova-metadata-metadata" containerID="cri-o://d2137644e16498ed9498042acf081b4e24799d067312a5dae03f4d1d622921ad" gracePeriod=30 Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.195885 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" containerName="nova-metadata-log" containerID="cri-o://cc56e4fd33dd400b1b4a1bcaa618404d311d59a5357dca918e8fafbb9500d3f8" gracePeriod=30 Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.198418 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d6ab0f0-3da9-4249-8225-3da652f4af33","Type":"ContainerStarted","Data":"0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683"} Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.203212 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42e5e272-92f3-43a7-8084-c7f1e697b9f3","Type":"ContainerStarted","Data":"5be5ddbef65b7836166874c8d20e5649ff430887f63d8a27d65b72f20702de39"} Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.217775 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.709873642 podStartE2EDuration="8.217756464s" podCreationTimestamp="2026-01-27 22:13:06 +0000 UTC" firstStartedPulling="2026-01-27 22:13:08.970100483 +0000 UTC m=+1541.386122182" lastFinishedPulling="2026-01-27 22:13:13.477983305 +0000 UTC m=+1545.894005004" observedRunningTime="2026-01-27 22:13:14.20606256 +0000 UTC m=+1546.622084259" watchObservedRunningTime="2026-01-27 22:13:14.217756464 +0000 UTC m=+1546.633778163" Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.239887 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.709374629 podStartE2EDuration="8.239867489s" podCreationTimestamp="2026-01-27 22:13:06 +0000 UTC" firstStartedPulling="2026-01-27 22:13:07.913788888 +0000 UTC m=+1540.329810587" lastFinishedPulling="2026-01-27 22:13:13.444281748 +0000 UTC m=+1545.860303447" observedRunningTime="2026-01-27 22:13:14.235974484 +0000 UTC m=+1546.651996183" watchObservedRunningTime="2026-01-27 22:13:14.239867489 +0000 UTC m=+1546.655889188" Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.260901 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.763019622 podStartE2EDuration="8.260885294s" podCreationTimestamp="2026-01-27 22:13:06 +0000 UTC" firstStartedPulling="2026-01-27 22:13:08.952490319 +0000 UTC m=+1541.368512018" lastFinishedPulling="2026-01-27 22:13:13.450355991 +0000 UTC m=+1545.866377690" observedRunningTime="2026-01-27 22:13:14.255077008 +0000 UTC m=+1546.671098697" watchObservedRunningTime="2026-01-27 22:13:14.260885294 +0000 UTC m=+1546.676906993" Jan 27 22:13:14 crc kubenswrapper[4803]: I0127 22:13:14.307431 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.346352584 podStartE2EDuration="8.307411926s" podCreationTimestamp="2026-01-27 22:13:06 +0000 UTC" firstStartedPulling="2026-01-27 22:13:08.482654431 +0000 UTC m=+1540.898676130" lastFinishedPulling="2026-01-27 22:13:13.443713753 +0000 UTC m=+1545.859735472" observedRunningTime="2026-01-27 22:13:14.283103542 +0000 UTC m=+1546.699125261" watchObservedRunningTime="2026-01-27 22:13:14.307411926 +0000 UTC m=+1546.723433625" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.215356 4803 generic.go:334] "Generic (PLEG): container finished" podID="1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" containerID="cc56e4fd33dd400b1b4a1bcaa618404d311d59a5357dca918e8fafbb9500d3f8" exitCode=143 Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.215692 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc","Type":"ContainerDied","Data":"cc56e4fd33dd400b1b4a1bcaa618404d311d59a5357dca918e8fafbb9500d3f8"} Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.217863 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d6ab0f0-3da9-4249-8225-3da652f4af33","Type":"ContainerStarted","Data":"84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9"} Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.307720 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:13:15 crc kubenswrapper[4803]: E0127 22:13:15.308080 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.379815 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.380145 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="sg-core" containerID="cri-o://db8313d43f328cb817e7425017d6c00b31c220877214596c865a3c1d765df5cf" gracePeriod=30 Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.380228 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="proxy-httpd" containerID="cri-o://a37d41e2e0ae47438ff83d2e6eee0976e50c882c69109fa80e66eaacfc3b80ba" gracePeriod=30 Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.380278 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="ceilometer-central-agent" containerID="cri-o://63b93a8afc33bc03a28f8c9e1ce3eb4360be0fb607d767221b60d43a71dfaa82" gracePeriod=30 Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.380258 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="ceilometer-notification-agent" containerID="cri-o://2e17679aa3d522e8046004ef8d280c9c6e69a157ed11ecd2f2a91067d2e10474" gracePeriod=30 Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.389835 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.239:3000/\": EOF" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.785563 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j8bpn"] Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.788759 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.814110 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8bpn"] Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.845505 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-utilities\") pod \"certified-operators-j8bpn\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.845716 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g69w9\" (UniqueName: \"kubernetes.io/projected/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-kube-api-access-g69w9\") pod \"certified-operators-j8bpn\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.845896 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-catalog-content\") pod \"certified-operators-j8bpn\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.948146 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-catalog-content\") pod \"certified-operators-j8bpn\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.948290 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-utilities\") pod \"certified-operators-j8bpn\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.948378 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g69w9\" (UniqueName: \"kubernetes.io/projected/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-kube-api-access-g69w9\") pod \"certified-operators-j8bpn\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.949113 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-catalog-content\") pod \"certified-operators-j8bpn\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.949122 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-utilities\") pod \"certified-operators-j8bpn\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:15 crc kubenswrapper[4803]: I0127 22:13:15.974724 4803 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g69w9\" (UniqueName: \"kubernetes.io/projected/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-kube-api-access-g69w9\") pod \"certified-operators-j8bpn\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:16 crc kubenswrapper[4803]: I0127 22:13:16.107621 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:16 crc kubenswrapper[4803]: I0127 22:13:16.257226 4803 generic.go:334] "Generic (PLEG): container finished" podID="82091755-eb7c-4c14-b262-0d7102b6799c" containerID="a37d41e2e0ae47438ff83d2e6eee0976e50c882c69109fa80e66eaacfc3b80ba" exitCode=0 Jan 27 22:13:16 crc kubenswrapper[4803]: I0127 22:13:16.257577 4803 generic.go:334] "Generic (PLEG): container finished" podID="82091755-eb7c-4c14-b262-0d7102b6799c" containerID="db8313d43f328cb817e7425017d6c00b31c220877214596c865a3c1d765df5cf" exitCode=2 Jan 27 22:13:16 crc kubenswrapper[4803]: I0127 22:13:16.257587 4803 generic.go:334] "Generic (PLEG): container finished" podID="82091755-eb7c-4c14-b262-0d7102b6799c" containerID="2e17679aa3d522e8046004ef8d280c9c6e69a157ed11ecd2f2a91067d2e10474" exitCode=0 Jan 27 22:13:16 crc kubenswrapper[4803]: I0127 22:13:16.257594 4803 generic.go:334] "Generic (PLEG): container finished" podID="82091755-eb7c-4c14-b262-0d7102b6799c" containerID="63b93a8afc33bc03a28f8c9e1ce3eb4360be0fb607d767221b60d43a71dfaa82" exitCode=0 Jan 27 22:13:16 crc kubenswrapper[4803]: I0127 22:13:16.257645 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82091755-eb7c-4c14-b262-0d7102b6799c","Type":"ContainerDied","Data":"a37d41e2e0ae47438ff83d2e6eee0976e50c882c69109fa80e66eaacfc3b80ba"} Jan 27 22:13:16 crc kubenswrapper[4803]: I0127 22:13:16.257698 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82091755-eb7c-4c14-b262-0d7102b6799c","Type":"ContainerDied","Data":"db8313d43f328cb817e7425017d6c00b31c220877214596c865a3c1d765df5cf"} Jan 27 22:13:16 crc kubenswrapper[4803]: I0127 22:13:16.257708 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82091755-eb7c-4c14-b262-0d7102b6799c","Type":"ContainerDied","Data":"2e17679aa3d522e8046004ef8d280c9c6e69a157ed11ecd2f2a91067d2e10474"} Jan 27 22:13:16 crc kubenswrapper[4803]: I0127 22:13:16.257717 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82091755-eb7c-4c14-b262-0d7102b6799c","Type":"ContainerDied","Data":"63b93a8afc33bc03a28f8c9e1ce3eb4360be0fb607d767221b60d43a71dfaa82"} Jan 27 22:13:16 crc kubenswrapper[4803]: I0127 22:13:16.768420 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8bpn"] Jan 27 22:13:16 crc kubenswrapper[4803]: I0127 22:13:16.994056 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.176576 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-scripts\") pod \"82091755-eb7c-4c14-b262-0d7102b6799c\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.177027 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-combined-ca-bundle\") pod \"82091755-eb7c-4c14-b262-0d7102b6799c\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.177062 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-run-httpd\") pod \"82091755-eb7c-4c14-b262-0d7102b6799c\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.177171 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-sg-core-conf-yaml\") pod \"82091755-eb7c-4c14-b262-0d7102b6799c\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.177196 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-log-httpd\") pod \"82091755-eb7c-4c14-b262-0d7102b6799c\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.177229 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-config-data\") pod \"82091755-eb7c-4c14-b262-0d7102b6799c\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.177281 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r72j\" (UniqueName: \"kubernetes.io/projected/82091755-eb7c-4c14-b262-0d7102b6799c-kube-api-access-9r72j\") pod \"82091755-eb7c-4c14-b262-0d7102b6799c\" (UID: \"82091755-eb7c-4c14-b262-0d7102b6799c\") " Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.178803 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "82091755-eb7c-4c14-b262-0d7102b6799c" (UID: "82091755-eb7c-4c14-b262-0d7102b6799c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.178909 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "82091755-eb7c-4c14-b262-0d7102b6799c" (UID: "82091755-eb7c-4c14-b262-0d7102b6799c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.183550 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-scripts" (OuterVolumeSpecName: "scripts") pod "82091755-eb7c-4c14-b262-0d7102b6799c" (UID: "82091755-eb7c-4c14-b262-0d7102b6799c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.183892 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82091755-eb7c-4c14-b262-0d7102b6799c-kube-api-access-9r72j" (OuterVolumeSpecName: "kube-api-access-9r72j") pod "82091755-eb7c-4c14-b262-0d7102b6799c" (UID: "82091755-eb7c-4c14-b262-0d7102b6799c"). InnerVolumeSpecName "kube-api-access-9r72j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.217861 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.218084 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.224965 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "82091755-eb7c-4c14-b262-0d7102b6799c" (UID: "82091755-eb7c-4c14-b262-0d7102b6799c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.293565 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82091755-eb7c-4c14-b262-0d7102b6799c","Type":"ContainerDied","Data":"e2ccb9066c5732f2218cecf26f3c16b172adafdc0bb273cf9e1c569db56fb7f5"} Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.293620 4803 scope.go:117] "RemoveContainer" containerID="a37d41e2e0ae47438ff83d2e6eee0976e50c882c69109fa80e66eaacfc3b80ba" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.293760 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.298253 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.298382 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.298396 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.298430 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82091755-eb7c-4c14-b262-0d7102b6799c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.298440 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r72j\" (UniqueName: \"kubernetes.io/projected/82091755-eb7c-4c14-b262-0d7102b6799c-kube-api-access-9r72j\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.304711 4803 generic.go:334] "Generic (PLEG): container finished" podID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" containerID="c4674eb2e0d5192822d2839c4b768a40c93c9d89d04cb83120891619d59121f2" exitCode=0 Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.308175 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8bpn" event={"ID":"58b89810-ff58-4c6f-941a-9c3c85bb8f5f","Type":"ContainerDied","Data":"c4674eb2e0d5192822d2839c4b768a40c93c9d89d04cb83120891619d59121f2"} Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.308351 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8bpn" event={"ID":"58b89810-ff58-4c6f-941a-9c3c85bb8f5f","Type":"ContainerStarted","Data":"d43f6efde41ffb268c741c66ce08eb38b013cc38d7526f9543dbbbfa4d207692"} Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.356157 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82091755-eb7c-4c14-b262-0d7102b6799c" (UID: "82091755-eb7c-4c14-b262-0d7102b6799c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.359756 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-config-data" (OuterVolumeSpecName: "config-data") pod "82091755-eb7c-4c14-b262-0d7102b6799c" (UID: "82091755-eb7c-4c14-b262-0d7102b6799c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.371607 4803 scope.go:117] "RemoveContainer" containerID="db8313d43f328cb817e7425017d6c00b31c220877214596c865a3c1d765df5cf" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.403495 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.403529 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82091755-eb7c-4c14-b262-0d7102b6799c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.404991 4803 scope.go:117] "RemoveContainer" containerID="2e17679aa3d522e8046004ef8d280c9c6e69a157ed11ecd2f2a91067d2e10474" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.433908 4803 scope.go:117] "RemoveContainer" containerID="63b93a8afc33bc03a28f8c9e1ce3eb4360be0fb607d767221b60d43a71dfaa82" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.442234 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.442299 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.486187 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.486235 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.539761 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.575297 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.584084 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.659674 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.673432 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.688291 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:13:17 crc kubenswrapper[4803]: E0127 22:13:17.688922 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="ceilometer-central-agent" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.688940 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="ceilometer-central-agent" Jan 27 22:13:17 crc kubenswrapper[4803]: E0127 22:13:17.688966 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="ceilometer-notification-agent" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.688973 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" 
containerName="ceilometer-notification-agent" Jan 27 22:13:17 crc kubenswrapper[4803]: E0127 22:13:17.688986 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="proxy-httpd" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.688992 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="proxy-httpd" Jan 27 22:13:17 crc kubenswrapper[4803]: E0127 22:13:17.688999 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="sg-core" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.689005 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="sg-core" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.689205 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="sg-core" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.689223 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="proxy-httpd" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.689234 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="ceilometer-central-agent" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.689260 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" containerName="ceilometer-notification-agent" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.691453 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.698688 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.698951 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.720548 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.757736 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-z6ndt"] Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.758021 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" podUID="c8eef822-1016-48a2-8073-99d10757edf5" containerName="dnsmasq-dns" containerID="cri-o://dc6f07943553d51f747eb4007c810e41784703f83f0fd88387073fd56463eb6b" gracePeriod=10 Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.812609 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-config-data\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.812704 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " 
pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.812727 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-run-httpd\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.812819 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.812862 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-scripts\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.812906 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-log-httpd\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.812933 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4f5x\" (UniqueName: \"kubernetes.io/projected/d41adc57-850e-4967-aa19-042dc8e991f9-kube-api-access-q4f5x\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.915007 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.915076 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-scripts\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.915141 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-log-httpd\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.915179 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4f5x\" (UniqueName: \"kubernetes.io/projected/d41adc57-850e-4967-aa19-042dc8e991f9-kube-api-access-q4f5x\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.915235 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-config-data\") 
pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.915776 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.915808 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-run-httpd\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.916632 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-run-httpd\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.923550 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.925463 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-log-httpd\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.928546 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.929949 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-config-data\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.940645 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-scripts\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:17 crc kubenswrapper[4803]: I0127 22:13:17.947746 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4f5x\" (UniqueName: \"kubernetes.io/projected/d41adc57-850e-4967-aa19-042dc8e991f9-kube-api-access-q4f5x\") pod \"ceilometer-0\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " pod="openstack/ceilometer-0" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.006912 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-trf25"] Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.008884 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-trf25" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.030735 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-trf25"] Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.031587 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.036915 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" podUID="c8eef822-1016-48a2-8073-99d10757edf5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.228:5353: connect: connection refused" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.099967 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0eea-account-create-update-czw5l"] Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.101516 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0eea-account-create-update-czw5l" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.107723 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.124553 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21340627-fe1d-49aa-898e-d11730736b41-operator-scripts\") pod \"aodh-db-create-trf25\" (UID: \"21340627-fe1d-49aa-898e-d11730736b41\") " pod="openstack/aodh-db-create-trf25" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.124599 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wms7s\" (UniqueName: \"kubernetes.io/projected/21340627-fe1d-49aa-898e-d11730736b41-kube-api-access-wms7s\") pod \"aodh-db-create-trf25\" (UID: \"21340627-fe1d-49aa-898e-d11730736b41\") " pod="openstack/aodh-db-create-trf25" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.124860 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0eea-account-create-update-czw5l"] Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.228272 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21340627-fe1d-49aa-898e-d11730736b41-operator-scripts\") pod \"aodh-db-create-trf25\" (UID: \"21340627-fe1d-49aa-898e-d11730736b41\") " pod="openstack/aodh-db-create-trf25" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.228318 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wms7s\" (UniqueName: \"kubernetes.io/projected/21340627-fe1d-49aa-898e-d11730736b41-kube-api-access-wms7s\") pod \"aodh-db-create-trf25\" (UID: \"21340627-fe1d-49aa-898e-d11730736b41\") " pod="openstack/aodh-db-create-trf25" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.228401 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54c72732-3cce-4113-98c5-cde54f72156f-operator-scripts\") pod \"aodh-0eea-account-create-update-czw5l\" (UID: \"54c72732-3cce-4113-98c5-cde54f72156f\") " pod="openstack/aodh-0eea-account-create-update-czw5l" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.228465 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-sdnmc\" (UniqueName: \"kubernetes.io/projected/54c72732-3cce-4113-98c5-cde54f72156f-kube-api-access-sdnmc\") pod \"aodh-0eea-account-create-update-czw5l\" (UID: \"54c72732-3cce-4113-98c5-cde54f72156f\") " pod="openstack/aodh-0eea-account-create-update-czw5l" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.229457 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21340627-fe1d-49aa-898e-d11730736b41-operator-scripts\") pod \"aodh-db-create-trf25\" (UID: \"21340627-fe1d-49aa-898e-d11730736b41\") " pod="openstack/aodh-db-create-trf25" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.261584 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wms7s\" (UniqueName: \"kubernetes.io/projected/21340627-fe1d-49aa-898e-d11730736b41-kube-api-access-wms7s\") pod \"aodh-db-create-trf25\" (UID: \"21340627-fe1d-49aa-898e-d11730736b41\") " pod="openstack/aodh-db-create-trf25" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.300924 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.241:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.301166 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.241:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.515926 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54c72732-3cce-4113-98c5-cde54f72156f-operator-scripts\") pod \"aodh-0eea-account-create-update-czw5l\" (UID: \"54c72732-3cce-4113-98c5-cde54f72156f\") " pod="openstack/aodh-0eea-account-create-update-czw5l" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.516352 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdnmc\" (UniqueName: \"kubernetes.io/projected/54c72732-3cce-4113-98c5-cde54f72156f-kube-api-access-sdnmc\") pod \"aodh-0eea-account-create-update-czw5l\" (UID: \"54c72732-3cce-4113-98c5-cde54f72156f\") " pod="openstack/aodh-0eea-account-create-update-czw5l" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.517459 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-trf25" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.519786 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54c72732-3cce-4113-98c5-cde54f72156f-operator-scripts\") pod \"aodh-0eea-account-create-update-czw5l\" (UID: \"54c72732-3cce-4113-98c5-cde54f72156f\") " pod="openstack/aodh-0eea-account-create-update-czw5l" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.562631 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82091755-eb7c-4c14-b262-0d7102b6799c" path="/var/lib/kubelet/pods/82091755-eb7c-4c14-b262-0d7102b6799c/volumes" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.564049 4803 generic.go:334] "Generic (PLEG): container finished" podID="c8eef822-1016-48a2-8073-99d10757edf5" containerID="dc6f07943553d51f747eb4007c810e41784703f83f0fd88387073fd56463eb6b" exitCode=0 Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.597555 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdnmc\" (UniqueName: \"kubernetes.io/projected/54c72732-3cce-4113-98c5-cde54f72156f-kube-api-access-sdnmc\") pod \"aodh-0eea-account-create-update-czw5l\" (UID: \"54c72732-3cce-4113-98c5-cde54f72156f\") " pod="openstack/aodh-0eea-account-create-update-czw5l" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.602605 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" event={"ID":"c8eef822-1016-48a2-8073-99d10757edf5","Type":"ContainerDied","Data":"dc6f07943553d51f747eb4007c810e41784703f83f0fd88387073fd56463eb6b"} Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.671812 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.786112 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0eea-account-create-update-czw5l" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.837715 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.967918 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-svc\") pod \"c8eef822-1016-48a2-8073-99d10757edf5\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.967959 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-config\") pod \"c8eef822-1016-48a2-8073-99d10757edf5\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.970730 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-nb\") pod \"c8eef822-1016-48a2-8073-99d10757edf5\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.970795 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-swift-storage-0\") pod \"c8eef822-1016-48a2-8073-99d10757edf5\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.970904 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-sb\") pod \"c8eef822-1016-48a2-8073-99d10757edf5\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.970935 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf2bx\" (UniqueName: \"kubernetes.io/projected/c8eef822-1016-48a2-8073-99d10757edf5-kube-api-access-rf2bx\") pod \"c8eef822-1016-48a2-8073-99d10757edf5\" (UID: \"c8eef822-1016-48a2-8073-99d10757edf5\") " Jan 27 22:13:18 crc kubenswrapper[4803]: I0127 22:13:18.980068 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8eef822-1016-48a2-8073-99d10757edf5-kube-api-access-rf2bx" (OuterVolumeSpecName: "kube-api-access-rf2bx") pod "c8eef822-1016-48a2-8073-99d10757edf5" (UID: "c8eef822-1016-48a2-8073-99d10757edf5"). InnerVolumeSpecName "kube-api-access-rf2bx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.023429 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.073519 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf2bx\" (UniqueName: \"kubernetes.io/projected/c8eef822-1016-48a2-8073-99d10757edf5-kube-api-access-rf2bx\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.518152 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-trf25"] Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.530067 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8eef822-1016-48a2-8073-99d10757edf5" (UID: "c8eef822-1016-48a2-8073-99d10757edf5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.549285 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c8eef822-1016-48a2-8073-99d10757edf5" (UID: "c8eef822-1016-48a2-8073-99d10757edf5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.549536 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8eef822-1016-48a2-8073-99d10757edf5" (UID: "c8eef822-1016-48a2-8073-99d10757edf5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.550744 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-config" (OuterVolumeSpecName: "config") pod "c8eef822-1016-48a2-8073-99d10757edf5" (UID: "c8eef822-1016-48a2-8073-99d10757edf5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.579411 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8eef822-1016-48a2-8073-99d10757edf5" (UID: "c8eef822-1016-48a2-8073-99d10757edf5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.588614 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.588650 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-config\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.588659 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.588668 4803 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.588678 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8eef822-1016-48a2-8073-99d10757edf5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.705046 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-trf25" event={"ID":"21340627-fe1d-49aa-898e-d11730736b41","Type":"ContainerStarted","Data":"fa37edc9dd5a6eac8f477abe553f2db49076765d43429806fd7251bda8d61b22"} Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.723996 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" event={"ID":"c8eef822-1016-48a2-8073-99d10757edf5","Type":"ContainerDied","Data":"dab0327053d901e99214daf85cf81cc6c14ae5f05c66f7e9b90d442850cfa419"} Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.724646 4803 scope.go:117] "RemoveContainer" containerID="dc6f07943553d51f747eb4007c810e41784703f83f0fd88387073fd56463eb6b" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.724052 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-z6ndt" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.732916 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41adc57-850e-4967-aa19-042dc8e991f9","Type":"ContainerStarted","Data":"1393d63576fe013b2e3adc36449d0a020fe7d9784f03e16b3e8ce8c493ab565b"} Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.752391 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8bpn" event={"ID":"58b89810-ff58-4c6f-941a-9c3c85bb8f5f","Type":"ContainerStarted","Data":"d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9"} Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.791029 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-z6ndt"] Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.799130 4803 scope.go:117] "RemoveContainer" containerID="a4cd8789282c3e67012cc35a05248a406727994e81356e6d30a12b19c08d74e8" Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.803903 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-z6ndt"] Jan 27 22:13:19 crc kubenswrapper[4803]: I0127 22:13:19.813495 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0eea-account-create-update-czw5l"] Jan 27 22:13:20 crc kubenswrapper[4803]: I0127 22:13:20.325935 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8eef822-1016-48a2-8073-99d10757edf5" path="/var/lib/kubelet/pods/c8eef822-1016-48a2-8073-99d10757edf5/volumes" Jan 27 22:13:20 crc kubenswrapper[4803]: I0127 22:13:20.815012 4803 generic.go:334] "Generic (PLEG): container finished" podID="54c72732-3cce-4113-98c5-cde54f72156f" containerID="b02de6fdab70afb12bbf5b06c8d12e44efc27f31f2264ff76e11c619a0c725e4" exitCode=0 Jan 27 22:13:20 crc kubenswrapper[4803]: I0127 22:13:20.818700 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0eea-account-create-update-czw5l" event={"ID":"54c72732-3cce-4113-98c5-cde54f72156f","Type":"ContainerDied","Data":"b02de6fdab70afb12bbf5b06c8d12e44efc27f31f2264ff76e11c619a0c725e4"} Jan 27 22:13:20 crc kubenswrapper[4803]: I0127 22:13:20.818776 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0eea-account-create-update-czw5l" event={"ID":"54c72732-3cce-4113-98c5-cde54f72156f","Type":"ContainerStarted","Data":"3bcadf381b17304ec47d3af1857ec7f92cbffc270551539125ff77cf61a69eae"} Jan 27 22:13:20 crc kubenswrapper[4803]: I0127 22:13:20.825918 4803 generic.go:334] "Generic (PLEG): container finished" podID="21340627-fe1d-49aa-898e-d11730736b41" containerID="004e9c75b67a035186c66deee967d3772d96ad0a67c77cc195461f0aaa27f00c" exitCode=0 Jan 27 22:13:20 crc kubenswrapper[4803]: I0127 22:13:20.827035 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-trf25" event={"ID":"21340627-fe1d-49aa-898e-d11730736b41","Type":"ContainerDied","Data":"004e9c75b67a035186c66deee967d3772d96ad0a67c77cc195461f0aaa27f00c"} Jan 27 22:13:20 crc kubenswrapper[4803]: I0127 22:13:20.840178 4803 generic.go:334] "Generic (PLEG): container finished" podID="45a4597f-3096-45fc-9383-7f891d163110" containerID="45f7e908d8f9f431a81ef47da5b52c27f94ec80deac8382e7a40d9754b781494" exitCode=0 Jan 27 22:13:20 crc kubenswrapper[4803]: I0127 22:13:20.840276 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w8rk6" 
event={"ID":"45a4597f-3096-45fc-9383-7f891d163110","Type":"ContainerDied","Data":"45f7e908d8f9f431a81ef47da5b52c27f94ec80deac8382e7a40d9754b781494"} Jan 27 22:13:20 crc kubenswrapper[4803]: I0127 22:13:20.875925 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41adc57-850e-4967-aa19-042dc8e991f9","Type":"ContainerStarted","Data":"19a49f108ef5c15b9d40503cacc12835e41a96b2636bcb2fbdd620c587609951"} Jan 27 22:13:21 crc kubenswrapper[4803]: I0127 22:13:21.907169 4803 generic.go:334] "Generic (PLEG): container finished" podID="50cb0429-fb71-444b-8fcd-d78847af272a" containerID="0e7439b3f9441dab751d360345a4e43a712ecdc1feabaff926e545f40c5b1203" exitCode=0 Jan 27 22:13:21 crc kubenswrapper[4803]: I0127 22:13:21.907509 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zgfkn" event={"ID":"50cb0429-fb71-444b-8fcd-d78847af272a","Type":"ContainerDied","Data":"0e7439b3f9441dab751d360345a4e43a712ecdc1feabaff926e545f40c5b1203"} Jan 27 22:13:21 crc kubenswrapper[4803]: I0127 22:13:21.911437 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41adc57-850e-4967-aa19-042dc8e991f9","Type":"ContainerStarted","Data":"e1e601996a5a07fc2678c0d069c16fc1022fb0b1a337fa336574dcbb61f94b10"} Jan 27 22:13:21 crc kubenswrapper[4803]: I0127 22:13:21.919891 4803 generic.go:334] "Generic (PLEG): container finished" podID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" containerID="d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9" exitCode=0 Jan 27 22:13:21 crc kubenswrapper[4803]: I0127 22:13:21.920087 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8bpn" event={"ID":"58b89810-ff58-4c6f-941a-9c3c85bb8f5f","Type":"ContainerDied","Data":"d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9"} Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.479930 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.510547 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-scripts\") pod \"45a4597f-3096-45fc-9383-7f891d163110\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.510696 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-combined-ca-bundle\") pod \"45a4597f-3096-45fc-9383-7f891d163110\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.510795 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-config-data\") pod \"45a4597f-3096-45fc-9383-7f891d163110\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.510836 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkh9k\" (UniqueName: \"kubernetes.io/projected/45a4597f-3096-45fc-9383-7f891d163110-kube-api-access-wkh9k\") pod \"45a4597f-3096-45fc-9383-7f891d163110\" (UID: \"45a4597f-3096-45fc-9383-7f891d163110\") " Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.519207 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a4597f-3096-45fc-9383-7f891d163110-kube-api-access-wkh9k" (OuterVolumeSpecName: "kube-api-access-wkh9k") pod "45a4597f-3096-45fc-9383-7f891d163110" (UID: "45a4597f-3096-45fc-9383-7f891d163110"). InnerVolumeSpecName "kube-api-access-wkh9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.525708 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-scripts" (OuterVolumeSpecName: "scripts") pod "45a4597f-3096-45fc-9383-7f891d163110" (UID: "45a4597f-3096-45fc-9383-7f891d163110"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.551265 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-config-data" (OuterVolumeSpecName: "config-data") pod "45a4597f-3096-45fc-9383-7f891d163110" (UID: "45a4597f-3096-45fc-9383-7f891d163110"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.578063 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45a4597f-3096-45fc-9383-7f891d163110" (UID: "45a4597f-3096-45fc-9383-7f891d163110"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.614811 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.614868 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.614882 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a4597f-3096-45fc-9383-7f891d163110-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.614892 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkh9k\" (UniqueName: \"kubernetes.io/projected/45a4597f-3096-45fc-9383-7f891d163110-kube-api-access-wkh9k\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.879659 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0eea-account-create-update-czw5l" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.885621 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-trf25" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.923916 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21340627-fe1d-49aa-898e-d11730736b41-operator-scripts\") pod \"21340627-fe1d-49aa-898e-d11730736b41\" (UID: \"21340627-fe1d-49aa-898e-d11730736b41\") " Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.924462 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54c72732-3cce-4113-98c5-cde54f72156f-operator-scripts\") pod \"54c72732-3cce-4113-98c5-cde54f72156f\" (UID: \"54c72732-3cce-4113-98c5-cde54f72156f\") " Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.924499 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdnmc\" (UniqueName: \"kubernetes.io/projected/54c72732-3cce-4113-98c5-cde54f72156f-kube-api-access-sdnmc\") pod \"54c72732-3cce-4113-98c5-cde54f72156f\" (UID: \"54c72732-3cce-4113-98c5-cde54f72156f\") " Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.924597 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wms7s\" (UniqueName: \"kubernetes.io/projected/21340627-fe1d-49aa-898e-d11730736b41-kube-api-access-wms7s\") pod \"21340627-fe1d-49aa-898e-d11730736b41\" (UID: \"21340627-fe1d-49aa-898e-d11730736b41\") " Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.930634 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21340627-fe1d-49aa-898e-d11730736b41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21340627-fe1d-49aa-898e-d11730736b41" (UID: "21340627-fe1d-49aa-898e-d11730736b41"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.930728 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54c72732-3cce-4113-98c5-cde54f72156f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54c72732-3cce-4113-98c5-cde54f72156f" (UID: "54c72732-3cce-4113-98c5-cde54f72156f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.938376 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21340627-fe1d-49aa-898e-d11730736b41-kube-api-access-wms7s" (OuterVolumeSpecName: "kube-api-access-wms7s") pod "21340627-fe1d-49aa-898e-d11730736b41" (UID: "21340627-fe1d-49aa-898e-d11730736b41"). InnerVolumeSpecName "kube-api-access-wms7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.942666 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54c72732-3cce-4113-98c5-cde54f72156f-kube-api-access-sdnmc" (OuterVolumeSpecName: "kube-api-access-sdnmc") pod "54c72732-3cce-4113-98c5-cde54f72156f" (UID: "54c72732-3cce-4113-98c5-cde54f72156f"). InnerVolumeSpecName "kube-api-access-sdnmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.950017 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0eea-account-create-update-czw5l" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.950069 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0eea-account-create-update-czw5l" event={"ID":"54c72732-3cce-4113-98c5-cde54f72156f","Type":"ContainerDied","Data":"3bcadf381b17304ec47d3af1857ec7f92cbffc270551539125ff77cf61a69eae"} Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.950133 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bcadf381b17304ec47d3af1857ec7f92cbffc270551539125ff77cf61a69eae" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.955880 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-trf25" event={"ID":"21340627-fe1d-49aa-898e-d11730736b41","Type":"ContainerDied","Data":"fa37edc9dd5a6eac8f477abe553f2db49076765d43429806fd7251bda8d61b22"} Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.956145 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa37edc9dd5a6eac8f477abe553f2db49076765d43429806fd7251bda8d61b22" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.956225 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-trf25" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.982605 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w8rk6" Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.983510 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w8rk6" event={"ID":"45a4597f-3096-45fc-9383-7f891d163110","Type":"ContainerDied","Data":"83417b4463eb5535aa02c8199f774a14a5f9e2e195e84188d5820bb15094c515"} Jan 27 22:13:22 crc kubenswrapper[4803]: I0127 22:13:22.983575 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83417b4463eb5535aa02c8199f774a14a5f9e2e195e84188d5820bb15094c515" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.005071 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41adc57-850e-4967-aa19-042dc8e991f9","Type":"ContainerStarted","Data":"7d8ea161a76fa2937f6217974a3a743f9e77fdab0d7b1fbf3a94ac21075ee3e3"} Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.027779 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21340627-fe1d-49aa-898e-d11730736b41-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.027814 4803 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54c72732-3cce-4113-98c5-cde54f72156f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.027827 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdnmc\" (UniqueName: \"kubernetes.io/projected/54c72732-3cce-4113-98c5-cde54f72156f-kube-api-access-sdnmc\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.027838 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wms7s\" (UniqueName: \"kubernetes.io/projected/21340627-fe1d-49aa-898e-d11730736b41-kube-api-access-wms7s\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.178743 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.179068 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerName="nova-api-log" containerID="cri-o://0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683" gracePeriod=30 Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.179556 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerName="nova-api-api" containerID="cri-o://84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9" gracePeriod=30 Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.199130 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.199314 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="42e5e272-92f3-43a7-8084-c7f1e697b9f3" containerName="nova-scheduler-scheduler" containerID="cri-o://5be5ddbef65b7836166874c8d20e5649ff430887f63d8a27d65b72f20702de39" gracePeriod=30 Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.403964 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.473325 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdksf\" (UniqueName: \"kubernetes.io/projected/50cb0429-fb71-444b-8fcd-d78847af272a-kube-api-access-rdksf\") pod \"50cb0429-fb71-444b-8fcd-d78847af272a\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.473421 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-scripts\") pod \"50cb0429-fb71-444b-8fcd-d78847af272a\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.473687 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-combined-ca-bundle\") pod \"50cb0429-fb71-444b-8fcd-d78847af272a\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.473742 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-config-data\") pod \"50cb0429-fb71-444b-8fcd-d78847af272a\" (UID: \"50cb0429-fb71-444b-8fcd-d78847af272a\") " Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.478056 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50cb0429-fb71-444b-8fcd-d78847af272a-kube-api-access-rdksf" (OuterVolumeSpecName: "kube-api-access-rdksf") pod "50cb0429-fb71-444b-8fcd-d78847af272a" (UID: "50cb0429-fb71-444b-8fcd-d78847af272a"). InnerVolumeSpecName "kube-api-access-rdksf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.479007 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-scripts" (OuterVolumeSpecName: "scripts") pod "50cb0429-fb71-444b-8fcd-d78847af272a" (UID: "50cb0429-fb71-444b-8fcd-d78847af272a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.511436 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-config-data" (OuterVolumeSpecName: "config-data") pod "50cb0429-fb71-444b-8fcd-d78847af272a" (UID: "50cb0429-fb71-444b-8fcd-d78847af272a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.544468 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50cb0429-fb71-444b-8fcd-d78847af272a" (UID: "50cb0429-fb71-444b-8fcd-d78847af272a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.580607 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.580669 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.580684 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdksf\" (UniqueName: \"kubernetes.io/projected/50cb0429-fb71-444b-8fcd-d78847af272a-kube-api-access-rdksf\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:23 crc kubenswrapper[4803]: I0127 22:13:23.580699 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50cb0429-fb71-444b-8fcd-d78847af272a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.014587 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 22:13:24 crc kubenswrapper[4803]: E0127 22:13:24.015085 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54c72732-3cce-4113-98c5-cde54f72156f" containerName="mariadb-account-create-update" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.015100 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="54c72732-3cce-4113-98c5-cde54f72156f" containerName="mariadb-account-create-update" Jan 27 22:13:24 crc kubenswrapper[4803]: E0127 22:13:24.015114 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50cb0429-fb71-444b-8fcd-d78847af272a" containerName="nova-cell1-conductor-db-sync" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.015120 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cb0429-fb71-444b-8fcd-d78847af272a" containerName="nova-cell1-conductor-db-sync" Jan 27 22:13:24 crc kubenswrapper[4803]: E0127 22:13:24.015136 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8eef822-1016-48a2-8073-99d10757edf5" containerName="dnsmasq-dns" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.015142 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8eef822-1016-48a2-8073-99d10757edf5" containerName="dnsmasq-dns" Jan 27 22:13:24 crc kubenswrapper[4803]: E0127 22:13:24.015177 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a4597f-3096-45fc-9383-7f891d163110" containerName="nova-manage" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.015184 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a4597f-3096-45fc-9383-7f891d163110" containerName="nova-manage" Jan 27 22:13:24 crc kubenswrapper[4803]: E0127 22:13:24.015202 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21340627-fe1d-49aa-898e-d11730736b41" containerName="mariadb-database-create" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.015209 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="21340627-fe1d-49aa-898e-d11730736b41" containerName="mariadb-database-create" Jan 27 22:13:24 crc kubenswrapper[4803]: E0127 22:13:24.015226 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8eef822-1016-48a2-8073-99d10757edf5" containerName="init" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 
22:13:24.015233 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8eef822-1016-48a2-8073-99d10757edf5" containerName="init" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.015438 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="21340627-fe1d-49aa-898e-d11730736b41" containerName="mariadb-database-create" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.015457 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a4597f-3096-45fc-9383-7f891d163110" containerName="nova-manage" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.015470 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8eef822-1016-48a2-8073-99d10757edf5" containerName="dnsmasq-dns" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.015484 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="50cb0429-fb71-444b-8fcd-d78847af272a" containerName="nova-cell1-conductor-db-sync" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.015497 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="54c72732-3cce-4113-98c5-cde54f72156f" containerName="mariadb-account-create-update" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.016322 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.017659 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zgfkn" event={"ID":"50cb0429-fb71-444b-8fcd-d78847af272a","Type":"ContainerDied","Data":"cfbc177025eaf2f1cbcd81ed07437e817712360cfd75f2273047a7abd9aa5d96"} Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.017693 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfbc177025eaf2f1cbcd81ed07437e817712360cfd75f2273047a7abd9aa5d96" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.017778 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zgfkn" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.021799 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41adc57-850e-4967-aa19-042dc8e991f9","Type":"ContainerStarted","Data":"eb63e6cdb7c9855f9c905c6505991397a9175f1ecf4d6b21005c307ddaa2f4fa"} Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.023171 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.025662 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8bpn" event={"ID":"58b89810-ff58-4c6f-941a-9c3c85bb8f5f","Type":"ContainerStarted","Data":"5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700"} Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.032349 4803 generic.go:334] "Generic (PLEG): container finished" podID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerID="0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683" exitCode=143 Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.032384 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d6ab0f0-3da9-4249-8225-3da652f4af33","Type":"ContainerDied","Data":"0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683"} Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.066425 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.085893 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j8bpn" podStartSLOduration=3.8228371 podStartE2EDuration="9.085873875s" podCreationTimestamp="2026-01-27 22:13:15 +0000 UTC" firstStartedPulling="2026-01-27 22:13:17.323714554 +0000 UTC m=+1549.739736253" lastFinishedPulling="2026-01-27 22:13:22.586751319 +0000 UTC m=+1555.002773028" observedRunningTime="2026-01-27 22:13:24.082132544 +0000 UTC m=+1556.498154243" watchObservedRunningTime="2026-01-27 22:13:24.085873875 +0000 UTC m=+1556.501895574" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.091720 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7b63c3-7320-4c4b-b099-d2f9c78abeec-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ba7b63c3-7320-4c4b-b099-d2f9c78abeec\") " pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.091789 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7b63c3-7320-4c4b-b099-d2f9c78abeec-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ba7b63c3-7320-4c4b-b099-d2f9c78abeec\") " pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.091864 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lsnp\" (UniqueName: \"kubernetes.io/projected/ba7b63c3-7320-4c4b-b099-d2f9c78abeec-kube-api-access-2lsnp\") pod \"nova-cell1-conductor-0\" (UID: \"ba7b63c3-7320-4c4b-b099-d2f9c78abeec\") " pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.106707 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.882567236 podStartE2EDuration="7.106689835s" podCreationTimestamp="2026-01-27 22:13:17 +0000 UTC" firstStartedPulling="2026-01-27 22:13:19.147006629 +0000 UTC m=+1551.563028328" lastFinishedPulling="2026-01-27 22:13:23.371129228 +0000 UTC m=+1555.787150927" observedRunningTime="2026-01-27 22:13:24.103459328 +0000 UTC m=+1556.519481027" watchObservedRunningTime="2026-01-27 22:13:24.106689835 +0000 UTC m=+1556.522711534" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.193988 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7b63c3-7320-4c4b-b099-d2f9c78abeec-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ba7b63c3-7320-4c4b-b099-d2f9c78abeec\") " pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.194052 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7b63c3-7320-4c4b-b099-d2f9c78abeec-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ba7b63c3-7320-4c4b-b099-d2f9c78abeec\") " pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.194099 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lsnp\" (UniqueName: \"kubernetes.io/projected/ba7b63c3-7320-4c4b-b099-d2f9c78abeec-kube-api-access-2lsnp\") pod \"nova-cell1-conductor-0\" (UID: \"ba7b63c3-7320-4c4b-b099-d2f9c78abeec\") " pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.200158 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7b63c3-7320-4c4b-b099-d2f9c78abeec-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ba7b63c3-7320-4c4b-b099-d2f9c78abeec\") " pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.213208 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7b63c3-7320-4c4b-b099-d2f9c78abeec-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ba7b63c3-7320-4c4b-b099-d2f9c78abeec\") " pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.227427 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lsnp\" (UniqueName: \"kubernetes.io/projected/ba7b63c3-7320-4c4b-b099-d2f9c78abeec-kube-api-access-2lsnp\") pod \"nova-cell1-conductor-0\" (UID: \"ba7b63c3-7320-4c4b-b099-d2f9c78abeec\") " pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.337431 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:24 crc kubenswrapper[4803]: I0127 22:13:24.866028 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 22:13:25 crc kubenswrapper[4803]: I0127 22:13:25.047342 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ba7b63c3-7320-4c4b-b099-d2f9c78abeec","Type":"ContainerStarted","Data":"84db1717c4bc1b04893650989bce5bcd112b4217fecc7c57c6fe9a0ce2e2ad4d"} Jan 27 22:13:26 crc kubenswrapper[4803]: I0127 22:13:26.063830 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ba7b63c3-7320-4c4b-b099-d2f9c78abeec","Type":"ContainerStarted","Data":"8139be198fc62fed0d8f4ae61a5b8ac00d334306981b0e4cbaf2aa48818d15a6"} Jan 27 22:13:26 crc kubenswrapper[4803]: I0127 22:13:26.098111 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.098087854 podStartE2EDuration="3.098087854s" podCreationTimestamp="2026-01-27 22:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:13:26.084116208 +0000 UTC m=+1558.500137917" watchObservedRunningTime="2026-01-27 22:13:26.098087854 +0000 UTC m=+1558.514109563" Jan 27 22:13:26 crc kubenswrapper[4803]: I0127 22:13:26.110630 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:26 crc kubenswrapper[4803]: I0127 22:13:26.111771 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:26 crc kubenswrapper[4803]: I0127 22:13:26.167617 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:26 crc kubenswrapper[4803]: I0127 22:13:26.978249 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.061356 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-combined-ca-bundle\") pod \"0d6ab0f0-3da9-4249-8225-3da652f4af33\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.061512 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v6tn\" (UniqueName: \"kubernetes.io/projected/0d6ab0f0-3da9-4249-8225-3da652f4af33-kube-api-access-6v6tn\") pod \"0d6ab0f0-3da9-4249-8225-3da652f4af33\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.061610 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-config-data\") pod \"0d6ab0f0-3da9-4249-8225-3da652f4af33\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.061678 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d6ab0f0-3da9-4249-8225-3da652f4af33-logs\") pod \"0d6ab0f0-3da9-4249-8225-3da652f4af33\" (UID: \"0d6ab0f0-3da9-4249-8225-3da652f4af33\") " Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.063139 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d6ab0f0-3da9-4249-8225-3da652f4af33-logs" (OuterVolumeSpecName: "logs") pod "0d6ab0f0-3da9-4249-8225-3da652f4af33" (UID: "0d6ab0f0-3da9-4249-8225-3da652f4af33"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.068147 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6ab0f0-3da9-4249-8225-3da652f4af33-kube-api-access-6v6tn" (OuterVolumeSpecName: "kube-api-access-6v6tn") pod "0d6ab0f0-3da9-4249-8225-3da652f4af33" (UID: "0d6ab0f0-3da9-4249-8225-3da652f4af33"). InnerVolumeSpecName "kube-api-access-6v6tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.137962 4803 generic.go:334] "Generic (PLEG): container finished" podID="42e5e272-92f3-43a7-8084-c7f1e697b9f3" containerID="5be5ddbef65b7836166874c8d20e5649ff430887f63d8a27d65b72f20702de39" exitCode=0 Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.138045 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42e5e272-92f3-43a7-8084-c7f1e697b9f3","Type":"ContainerDied","Data":"5be5ddbef65b7836166874c8d20e5649ff430887f63d8a27d65b72f20702de39"} Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.141727 4803 generic.go:334] "Generic (PLEG): container finished" podID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerID="84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9" exitCode=0 Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.143171 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.143782 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d6ab0f0-3da9-4249-8225-3da652f4af33","Type":"ContainerDied","Data":"84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9"} Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.143917 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d6ab0f0-3da9-4249-8225-3da652f4af33","Type":"ContainerDied","Data":"e9cef74a80d5ee19b4bc3151ec72274cd3bcccb01b79af1f814ea69db93bb3a7"} Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.143991 4803 scope.go:117] "RemoveContainer" containerID="84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.144644 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.164474 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v6tn\" (UniqueName: \"kubernetes.io/projected/0d6ab0f0-3da9-4249-8225-3da652f4af33-kube-api-access-6v6tn\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.164647 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d6ab0f0-3da9-4249-8225-3da652f4af33-logs\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.178075 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-config-data" (OuterVolumeSpecName: "config-data") pod "0d6ab0f0-3da9-4249-8225-3da652f4af33" (UID: "0d6ab0f0-3da9-4249-8225-3da652f4af33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.181185 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d6ab0f0-3da9-4249-8225-3da652f4af33" (UID: "0d6ab0f0-3da9-4249-8225-3da652f4af33"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.219741 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.266762 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.267140 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d6ab0f0-3da9-4249-8225-3da652f4af33-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.274605 4803 scope.go:117] "RemoveContainer" containerID="0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.282393 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8bpn"] Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.293453 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.312021 4803 scope.go:117] "RemoveContainer" containerID="84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9" Jan 27 22:13:27 crc kubenswrapper[4803]: E0127 22:13:27.313337 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9\": container with ID starting with 84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9 not found: ID does not exist" containerID="84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.313371 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9"} err="failed to get container status \"84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9\": rpc error: code = NotFound desc = could not find container \"84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9\": container with ID starting with 84c60e9cfcca1a01040191f7a6e2b51f8cd4ebf073aeb867a046a9ba271060b9 not found: ID does not exist" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.313453 4803 scope.go:117] "RemoveContainer" containerID="0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683" Jan 27 22:13:27 crc kubenswrapper[4803]: E0127 22:13:27.316968 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683\": container with ID starting with 0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683 not found: ID does not exist" containerID="0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.317011 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683"} err="failed to get container status \"0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683\": rpc error: code = NotFound 
desc = could not find container \"0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683\": container with ID starting with 0b758ead9c1e17e5d048754a73a65d41be9f1f60635222c0a5ba5d2395f03683 not found: ID does not exist" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.369009 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-config-data\") pod \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.369140 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-combined-ca-bundle\") pod \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.369169 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkrrc\" (UniqueName: \"kubernetes.io/projected/42e5e272-92f3-43a7-8084-c7f1e697b9f3-kube-api-access-nkrrc\") pod \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\" (UID: \"42e5e272-92f3-43a7-8084-c7f1e697b9f3\") " Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.375295 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e5e272-92f3-43a7-8084-c7f1e697b9f3-kube-api-access-nkrrc" (OuterVolumeSpecName: "kube-api-access-nkrrc") pod "42e5e272-92f3-43a7-8084-c7f1e697b9f3" (UID: "42e5e272-92f3-43a7-8084-c7f1e697b9f3"). InnerVolumeSpecName "kube-api-access-nkrrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.419034 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-config-data" (OuterVolumeSpecName: "config-data") pod "42e5e272-92f3-43a7-8084-c7f1e697b9f3" (UID: "42e5e272-92f3-43a7-8084-c7f1e697b9f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.421137 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42e5e272-92f3-43a7-8084-c7f1e697b9f3" (UID: "42e5e272-92f3-43a7-8084-c7f1e697b9f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.472395 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.472612 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e5e272-92f3-43a7-8084-c7f1e697b9f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.472691 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkrrc\" (UniqueName: \"kubernetes.io/projected/42e5e272-92f3-43a7-8084-c7f1e697b9f3-kube-api-access-nkrrc\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.551254 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.562462 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.584420 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 22:13:27 crc kubenswrapper[4803]: E0127 22:13:27.585181 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e5e272-92f3-43a7-8084-c7f1e697b9f3" containerName="nova-scheduler-scheduler" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.585206 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e5e272-92f3-43a7-8084-c7f1e697b9f3" containerName="nova-scheduler-scheduler" Jan 27 22:13:27 crc kubenswrapper[4803]: E0127 22:13:27.585249 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerName="nova-api-log" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.585260 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerName="nova-api-log" Jan 27 22:13:27 crc kubenswrapper[4803]: E0127 22:13:27.585284 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerName="nova-api-api" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.585292 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerName="nova-api-api" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.585581 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e5e272-92f3-43a7-8084-c7f1e697b9f3" containerName="nova-scheduler-scheduler" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.585607 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerName="nova-api-log" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.585626 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6ab0f0-3da9-4249-8225-3da652f4af33" containerName="nova-api-api" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.586957 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.590715 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.597089 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.676942 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.677104 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn4lc\" (UniqueName: \"kubernetes.io/projected/eb00ca76-3437-43a0-ada9-1a37c535137c-kube-api-access-hn4lc\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.677547 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-config-data\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.677750 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb00ca76-3437-43a0-ada9-1a37c535137c-logs\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.780541 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.780610 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn4lc\" (UniqueName: \"kubernetes.io/projected/eb00ca76-3437-43a0-ada9-1a37c535137c-kube-api-access-hn4lc\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.780746 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-config-data\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.780832 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb00ca76-3437-43a0-ada9-1a37c535137c-logs\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.781543 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb00ca76-3437-43a0-ada9-1a37c535137c-logs\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " 
pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.801978 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-config-data\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.802252 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.813750 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn4lc\" (UniqueName: \"kubernetes.io/projected/eb00ca76-3437-43a0-ada9-1a37c535137c-kube-api-access-hn4lc\") pod \"nova-api-0\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") " pod="openstack/nova-api-0" Jan 27 22:13:27 crc kubenswrapper[4803]: I0127 22:13:27.903193 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.169615 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.170231 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42e5e272-92f3-43a7-8084-c7f1e697b9f3","Type":"ContainerDied","Data":"6a004aa1005e2eccde99ff81b2332dd39fc565e5b266fe386f1239341e511792"} Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.170340 4803 scope.go:117] "RemoveContainer" containerID="5be5ddbef65b7836166874c8d20e5649ff430887f63d8a27d65b72f20702de39" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.227709 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.247026 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.259697 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.272214 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.276492 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.285106 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.373659 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:13:28 crc kubenswrapper[4803]: E0127 22:13:28.378208 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.385287 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d6ab0f0-3da9-4249-8225-3da652f4af33" path="/var/lib/kubelet/pods/0d6ab0f0-3da9-4249-8225-3da652f4af33/volumes" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.386789 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e5e272-92f3-43a7-8084-c7f1e697b9f3" path="/var/lib/kubelet/pods/42e5e272-92f3-43a7-8084-c7f1e697b9f3/volumes" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.397165 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdmgx\" (UniqueName: \"kubernetes.io/projected/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-kube-api-access-hdmgx\") pod \"nova-scheduler-0\" (UID: \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.397247 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-config-data\") pod \"nova-scheduler-0\" (UID: \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.397512 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.430448 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.499411 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdmgx\" (UniqueName: \"kubernetes.io/projected/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-kube-api-access-hdmgx\") pod \"nova-scheduler-0\" (UID: \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.499496 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-config-data\") pod \"nova-scheduler-0\" (UID: 
\"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.499565 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.518303 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-config-data\") pod \"nova-scheduler-0\" (UID: \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.529431 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.538298 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdmgx\" (UniqueName: \"kubernetes.io/projected/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-kube-api-access-hdmgx\") pod \"nova-scheduler-0\" (UID: \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.578402 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-q7tp4"] Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.580001 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.585101 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.585497 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-vtwk7" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.585583 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.585651 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.590240 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-q7tp4"] Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.619459 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.712396 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbvqk\" (UniqueName: \"kubernetes.io/projected/489ecf39-a12d-47b3-8f74-eb20ea68f519-kube-api-access-qbvqk\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.712734 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-combined-ca-bundle\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.712873 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-scripts\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.713023 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-config-data\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.814830 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-combined-ca-bundle\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.814920 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-scripts\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.814959 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-config-data\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.815053 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbvqk\" (UniqueName: \"kubernetes.io/projected/489ecf39-a12d-47b3-8f74-eb20ea68f519-kube-api-access-qbvqk\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.821535 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-combined-ca-bundle\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.821758 4803 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-config-data\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.824152 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-scripts\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:28 crc kubenswrapper[4803]: I0127 22:13:28.836529 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbvqk\" (UniqueName: \"kubernetes.io/projected/489ecf39-a12d-47b3-8f74-eb20ea68f519-kube-api-access-qbvqk\") pod \"aodh-db-sync-q7tp4\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:29 crc kubenswrapper[4803]: I0127 22:13:29.001414 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:29 crc kubenswrapper[4803]: I0127 22:13:29.206044 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb00ca76-3437-43a0-ada9-1a37c535137c","Type":"ContainerStarted","Data":"66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030"} Jan 27 22:13:29 crc kubenswrapper[4803]: I0127 22:13:29.206346 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb00ca76-3437-43a0-ada9-1a37c535137c","Type":"ContainerStarted","Data":"dfe638f65cadbca3aa0dfcaa3de0f76d45265029e9ec266d404842dd48068ccf"} Jan 27 22:13:29 crc kubenswrapper[4803]: I0127 22:13:29.215451 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j8bpn" podUID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" containerName="registry-server" containerID="cri-o://5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700" gracePeriod=2 Jan 27 22:13:29 crc kubenswrapper[4803]: I0127 22:13:29.307356 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:13:29 crc kubenswrapper[4803]: I0127 22:13:29.686719 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-q7tp4"] Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.085091 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.235556 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb00ca76-3437-43a0-ada9-1a37c535137c","Type":"ContainerStarted","Data":"8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0"} Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.240523 4803 generic.go:334] "Generic (PLEG): container finished" podID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" containerID="5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700" exitCode=0 Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.240596 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8bpn" event={"ID":"58b89810-ff58-4c6f-941a-9c3c85bb8f5f","Type":"ContainerDied","Data":"5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700"} Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.240624 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8bpn" event={"ID":"58b89810-ff58-4c6f-941a-9c3c85bb8f5f","Type":"ContainerDied","Data":"d43f6efde41ffb268c741c66ce08eb38b013cc38d7526f9543dbbbfa4d207692"} Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.240649 4803 scope.go:117] "RemoveContainer" containerID="5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.240788 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8bpn" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.256454 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q7tp4" event={"ID":"489ecf39-a12d-47b3-8f74-eb20ea68f519","Type":"ContainerStarted","Data":"344e30c41d907462bd25e39957392acbf9b5f52e2fcc46eccf6799b135000834"} Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.258136 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g69w9\" (UniqueName: \"kubernetes.io/projected/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-kube-api-access-g69w9\") pod \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.258395 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-catalog-content\") pod \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.258537 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-utilities\") pod \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\" (UID: \"58b89810-ff58-4c6f-941a-9c3c85bb8f5f\") " Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.259573 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-utilities" (OuterVolumeSpecName: "utilities") pod "58b89810-ff58-4c6f-941a-9c3c85bb8f5f" (UID: "58b89810-ff58-4c6f-941a-9c3c85bb8f5f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.265099 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-kube-api-access-g69w9" (OuterVolumeSpecName: "kube-api-access-g69w9") pod "58b89810-ff58-4c6f-941a-9c3c85bb8f5f" (UID: "58b89810-ff58-4c6f-941a-9c3c85bb8f5f"). InnerVolumeSpecName "kube-api-access-g69w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.266767 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da532001-f5a2-4f8d-99ca-c2b8b35fd77a","Type":"ContainerStarted","Data":"87b939a98ee9f41caf682a9438652bbea98de5063a0ce0f39648ae86da827980"} Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.266814 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da532001-f5a2-4f8d-99ca-c2b8b35fd77a","Type":"ContainerStarted","Data":"2e20810ba6c239212d7b3f018e23561a53ae04117b0d4799b2aa9575f0dc2723"} Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.277052 4803 scope.go:117] "RemoveContainer" containerID="d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.286826 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.28680598 podStartE2EDuration="3.28680598s" podCreationTimestamp="2026-01-27 22:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:13:30.256765011 +0000 UTC m=+1562.672786710" watchObservedRunningTime="2026-01-27 22:13:30.28680598 +0000 UTC m=+1562.702827679" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.291799 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.291789074 podStartE2EDuration="2.291789074s" podCreationTimestamp="2026-01-27 22:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:13:30.280440279 +0000 UTC m=+1562.696461978" watchObservedRunningTime="2026-01-27 22:13:30.291789074 +0000 UTC m=+1562.707810763" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.314062 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58b89810-ff58-4c6f-941a-9c3c85bb8f5f" (UID: "58b89810-ff58-4c6f-941a-9c3c85bb8f5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.316080 4803 scope.go:117] "RemoveContainer" containerID="c4674eb2e0d5192822d2839c4b768a40c93c9d89d04cb83120891619d59121f2" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.338993 4803 scope.go:117] "RemoveContainer" containerID="5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700" Jan 27 22:13:30 crc kubenswrapper[4803]: E0127 22:13:30.339437 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700\": container with ID starting with 5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700 not found: ID does not exist" containerID="5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.339468 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700"} err="failed to get container status \"5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700\": rpc error: code = NotFound desc = could not find container \"5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700\": container with ID starting with 5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700 not found: ID does not exist" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.339487 4803 scope.go:117] "RemoveContainer" containerID="d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9" Jan 27 22:13:30 crc kubenswrapper[4803]: E0127 22:13:30.339891 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9\": container with ID starting with d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9 not found: ID does not exist" containerID="d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.339908 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9"} err="failed to get container status \"d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9\": rpc error: code = NotFound desc = could not find container \"d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9\": container with ID starting with d736504afa1fc3b598714d059965f0ce814cec64726d634fcd42be9193dc52e9 not found: ID does not exist" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.339920 4803 scope.go:117] "RemoveContainer" containerID="c4674eb2e0d5192822d2839c4b768a40c93c9d89d04cb83120891619d59121f2" Jan 27 22:13:30 crc kubenswrapper[4803]: E0127 22:13:30.340152 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4674eb2e0d5192822d2839c4b768a40c93c9d89d04cb83120891619d59121f2\": container with ID starting with c4674eb2e0d5192822d2839c4b768a40c93c9d89d04cb83120891619d59121f2 not found: ID does not exist" containerID="c4674eb2e0d5192822d2839c4b768a40c93c9d89d04cb83120891619d59121f2" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.340167 4803 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c4674eb2e0d5192822d2839c4b768a40c93c9d89d04cb83120891619d59121f2"} err="failed to get container status \"c4674eb2e0d5192822d2839c4b768a40c93c9d89d04cb83120891619d59121f2\": rpc error: code = NotFound desc = could not find container \"c4674eb2e0d5192822d2839c4b768a40c93c9d89d04cb83120891619d59121f2\": container with ID starting with c4674eb2e0d5192822d2839c4b768a40c93c9d89d04cb83120891619d59121f2 not found: ID does not exist" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.361477 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.361513 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g69w9\" (UniqueName: \"kubernetes.io/projected/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-kube-api-access-g69w9\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.361528 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b89810-ff58-4c6f-941a-9c3c85bb8f5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.569005 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8bpn"] Jan 27 22:13:30 crc kubenswrapper[4803]: I0127 22:13:30.597327 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j8bpn"] Jan 27 22:13:32 crc kubenswrapper[4803]: I0127 22:13:32.325178 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" path="/var/lib/kubelet/pods/58b89810-ff58-4c6f-941a-9c3c85bb8f5f/volumes" Jan 27 22:13:33 crc kubenswrapper[4803]: I0127 22:13:33.620208 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 22:13:34 crc kubenswrapper[4803]: I0127 22:13:34.372545 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 22:13:35 crc kubenswrapper[4803]: I0127 22:13:35.346249 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q7tp4" event={"ID":"489ecf39-a12d-47b3-8f74-eb20ea68f519","Type":"ContainerStarted","Data":"6a35bc65ea7413b3f9d28be260d0207a58d3d2dd4e4cb380c807128058befc91"} Jan 27 22:13:35 crc kubenswrapper[4803]: I0127 22:13:35.395894 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-q7tp4" podStartSLOduration=2.9922493770000003 podStartE2EDuration="7.395810402s" podCreationTimestamp="2026-01-27 22:13:28 +0000 UTC" firstStartedPulling="2026-01-27 22:13:29.701187807 +0000 UTC m=+1562.117209506" lastFinishedPulling="2026-01-27 22:13:34.104748842 +0000 UTC m=+1566.520770531" observedRunningTime="2026-01-27 22:13:35.384092787 +0000 UTC m=+1567.800114486" watchObservedRunningTime="2026-01-27 22:13:35.395810402 +0000 UTC m=+1567.811832111" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.339107 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hd7bp"] Jan 27 22:13:37 crc kubenswrapper[4803]: E0127 22:13:37.339938 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" containerName="registry-server" Jan 27 22:13:37 crc kubenswrapper[4803]: 
I0127 22:13:37.339952 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" containerName="registry-server" Jan 27 22:13:37 crc kubenswrapper[4803]: E0127 22:13:37.339992 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" containerName="extract-utilities" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.340000 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" containerName="extract-utilities" Jan 27 22:13:37 crc kubenswrapper[4803]: E0127 22:13:37.340024 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" containerName="extract-content" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.340030 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" containerName="extract-content" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.340239 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b89810-ff58-4c6f-941a-9c3c85bb8f5f" containerName="registry-server" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.342163 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.364348 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hd7bp"] Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.370863 4803 generic.go:334] "Generic (PLEG): container finished" podID="489ecf39-a12d-47b3-8f74-eb20ea68f519" containerID="6a35bc65ea7413b3f9d28be260d0207a58d3d2dd4e4cb380c807128058befc91" exitCode=0 Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.370895 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q7tp4" event={"ID":"489ecf39-a12d-47b3-8f74-eb20ea68f519","Type":"ContainerDied","Data":"6a35bc65ea7413b3f9d28be260d0207a58d3d2dd4e4cb380c807128058befc91"} Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.432727 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-utilities\") pod \"redhat-marketplace-hd7bp\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") " pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.432778 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-catalog-content\") pod \"redhat-marketplace-hd7bp\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") " pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.432953 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxdg2\" (UniqueName: \"kubernetes.io/projected/78144907-f957-40fb-a2f5-c95fe6c56ae7-kube-api-access-hxdg2\") pod \"redhat-marketplace-hd7bp\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") " pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.535783 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxdg2\" (UniqueName: 
\"kubernetes.io/projected/78144907-f957-40fb-a2f5-c95fe6c56ae7-kube-api-access-hxdg2\") pod \"redhat-marketplace-hd7bp\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") " pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.536065 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-utilities\") pod \"redhat-marketplace-hd7bp\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") " pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.536119 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-catalog-content\") pod \"redhat-marketplace-hd7bp\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") " pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.536578 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-utilities\") pod \"redhat-marketplace-hd7bp\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") " pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.536704 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-catalog-content\") pod \"redhat-marketplace-hd7bp\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") " pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.555811 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxdg2\" (UniqueName: \"kubernetes.io/projected/78144907-f957-40fb-a2f5-c95fe6c56ae7-kube-api-access-hxdg2\") pod \"redhat-marketplace-hd7bp\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") " pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.665982 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.904043 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 22:13:37 crc kubenswrapper[4803]: I0127 22:13:37.904404 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.206498 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hd7bp"] Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.382986 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd7bp" event={"ID":"78144907-f957-40fb-a2f5-c95fe6c56ae7","Type":"ContainerStarted","Data":"e44fdb8958a2e7f2bddcce7d2ccb962bddf7ac79191273202f4a0d93b49edead"} Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.622130 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.674764 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.824748 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.966088 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-combined-ca-bundle\") pod \"489ecf39-a12d-47b3-8f74-eb20ea68f519\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.966285 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-scripts\") pod \"489ecf39-a12d-47b3-8f74-eb20ea68f519\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.966366 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbvqk\" (UniqueName: \"kubernetes.io/projected/489ecf39-a12d-47b3-8f74-eb20ea68f519-kube-api-access-qbvqk\") pod \"489ecf39-a12d-47b3-8f74-eb20ea68f519\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.966436 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-config-data\") pod \"489ecf39-a12d-47b3-8f74-eb20ea68f519\" (UID: \"489ecf39-a12d-47b3-8f74-eb20ea68f519\") " Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.974588 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489ecf39-a12d-47b3-8f74-eb20ea68f519-kube-api-access-qbvqk" (OuterVolumeSpecName: "kube-api-access-qbvqk") pod "489ecf39-a12d-47b3-8f74-eb20ea68f519" (UID: "489ecf39-a12d-47b3-8f74-eb20ea68f519"). InnerVolumeSpecName "kube-api-access-qbvqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.985088 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.987141 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 22:13:38 crc kubenswrapper[4803]: I0127 22:13:38.994310 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-scripts" (OuterVolumeSpecName: "scripts") pod "489ecf39-a12d-47b3-8f74-eb20ea68f519" (UID: "489ecf39-a12d-47b3-8f74-eb20ea68f519"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.008812 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-config-data" (OuterVolumeSpecName: "config-data") pod "489ecf39-a12d-47b3-8f74-eb20ea68f519" (UID: "489ecf39-a12d-47b3-8f74-eb20ea68f519"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.014050 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "489ecf39-a12d-47b3-8f74-eb20ea68f519" (UID: "489ecf39-a12d-47b3-8f74-eb20ea68f519"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.069848 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.069931 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.069943 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbvqk\" (UniqueName: \"kubernetes.io/projected/489ecf39-a12d-47b3-8f74-eb20ea68f519-kube-api-access-qbvqk\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.069958 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489ecf39-a12d-47b3-8f74-eb20ea68f519-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.396912 4803 generic.go:334] "Generic (PLEG): container finished" podID="78144907-f957-40fb-a2f5-c95fe6c56ae7" containerID="37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07" exitCode=0 Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.396992 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd7bp" event={"ID":"78144907-f957-40fb-a2f5-c95fe6c56ae7","Type":"ContainerDied","Data":"37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07"} Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.401181 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-q7tp4" Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.401325 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q7tp4" event={"ID":"489ecf39-a12d-47b3-8f74-eb20ea68f519","Type":"ContainerDied","Data":"344e30c41d907462bd25e39957392acbf9b5f52e2fcc46eccf6799b135000834"} Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.401369 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="344e30c41d907462bd25e39957392acbf9b5f52e2fcc46eccf6799b135000834" Jan 27 22:13:39 crc kubenswrapper[4803]: I0127 22:13:39.457558 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 22:13:40 crc kubenswrapper[4803]: I0127 22:13:40.307358 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:13:40 crc kubenswrapper[4803]: E0127 22:13:40.308110 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:13:42 crc kubenswrapper[4803]: I0127 22:13:42.440827 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd7bp" event={"ID":"78144907-f957-40fb-a2f5-c95fe6c56ae7","Type":"ContainerStarted","Data":"444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7"} Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.452749 4803 generic.go:334] "Generic (PLEG): container finished" podID="78144907-f957-40fb-a2f5-c95fe6c56ae7" containerID="444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7" exitCode=0 Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.452805 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd7bp" event={"ID":"78144907-f957-40fb-a2f5-c95fe6c56ae7","Type":"ContainerDied","Data":"444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7"} Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.783137 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 22:13:43 crc kubenswrapper[4803]: E0127 22:13:43.799061 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ecf39-a12d-47b3-8f74-eb20ea68f519" containerName="aodh-db-sync" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.799094 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ecf39-a12d-47b3-8f74-eb20ea68f519" containerName="aodh-db-sync" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.799428 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ecf39-a12d-47b3-8f74-eb20ea68f519" containerName="aodh-db-sync" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.802086 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.805244 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.809292 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.813514 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-vtwk7" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.814038 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.879293 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-config-data\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.879743 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-scripts\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.879983 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cj8l\" (UniqueName: \"kubernetes.io/projected/e3bc9280-942c-487a-85ef-3da17fa151ba-kube-api-access-5cj8l\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.880051 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.981574 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cj8l\" (UniqueName: \"kubernetes.io/projected/e3bc9280-942c-487a-85ef-3da17fa151ba-kube-api-access-5cj8l\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.981663 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.981703 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-config-data\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.981760 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-scripts\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:43 crc kubenswrapper[4803]: 
I0127 22:13:43.987814 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:43 crc kubenswrapper[4803]: I0127 22:13:43.990823 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-config-data\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:44 crc kubenswrapper[4803]: I0127 22:13:44.013585 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-scripts\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:44 crc kubenswrapper[4803]: I0127 22:13:44.014360 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cj8l\" (UniqueName: \"kubernetes.io/projected/e3bc9280-942c-487a-85ef-3da17fa151ba-kube-api-access-5cj8l\") pod \"aodh-0\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " pod="openstack/aodh-0" Jan 27 22:13:44 crc kubenswrapper[4803]: I0127 22:13:44.131660 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 22:13:44 crc kubenswrapper[4803]: W0127 22:13:44.358011 4803 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78144907_f957_40fb_a2f5_c95fe6c56ae7.slice/crio-conmon-37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78144907_f957_40fb_a2f5_c95fe6c56ae7.slice/crio-conmon-37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07.scope: no such file or directory Jan 27 22:13:44 crc kubenswrapper[4803]: W0127 22:13:44.358332 4803 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78144907_f957_40fb_a2f5_c95fe6c56ae7.slice/crio-37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78144907_f957_40fb_a2f5_c95fe6c56ae7.slice/crio-37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07.scope: no such file or directory Jan 27 22:13:44 crc kubenswrapper[4803]: W0127 22:13:44.360826 4803 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78144907_f957_40fb_a2f5_c95fe6c56ae7.slice/crio-conmon-444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78144907_f957_40fb_a2f5_c95fe6c56ae7.slice/crio-conmon-444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7.scope: no such file or directory Jan 27 22:13:44 crc kubenswrapper[4803]: W0127 22:13:44.360904 4803 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78144907_f957_40fb_a2f5_c95fe6c56ae7.slice/crio-444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78144907_f957_40fb_a2f5_c95fe6c56ae7.slice/crio-444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7.scope: no such file or directory Jan 27 22:13:44 crc kubenswrapper[4803]: I0127 22:13:44.488000 4803 generic.go:334] "Generic (PLEG): container finished" podID="3e35b785-4c7d-4677-bd3c-8642931036c0" containerID="91e6f87023c54ff05031e27b2e720b8d5d7fbd7b9e15e7132d1c3c580fe5a30d" exitCode=137 Jan 27 22:13:44 crc kubenswrapper[4803]: I0127 22:13:44.488079 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e35b785-4c7d-4677-bd3c-8642931036c0","Type":"ContainerDied","Data":"91e6f87023c54ff05031e27b2e720b8d5d7fbd7b9e15e7132d1c3c580fe5a30d"} Jan 27 22:13:44 crc kubenswrapper[4803]: I0127 22:13:44.495805 4803 generic.go:334] "Generic (PLEG): container finished" podID="1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" containerID="d2137644e16498ed9498042acf081b4e24799d067312a5dae03f4d1d622921ad" exitCode=137 Jan 27 22:13:44 crc kubenswrapper[4803]: I0127 22:13:44.495883 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc","Type":"ContainerDied","Data":"d2137644e16498ed9498042acf081b4e24799d067312a5dae03f4d1d622921ad"} Jan 27 22:13:44 crc kubenswrapper[4803]: E0127 22:13:44.680192 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58b89810_ff58_4c6f_941a_9c3c85bb8f5f.slice/crio-5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e35b785_4c7d_4677_bd3c_8642931036c0.slice/crio-91e6f87023c54ff05031e27b2e720b8d5d7fbd7b9e15e7132d1c3c580fe5a30d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58b89810_ff58_4c6f_941a_9c3c85bb8f5f.slice/crio-d43f6efde41ffb268c741c66ce08eb38b013cc38d7526f9543dbbbfa4d207692\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e7d2d7f_1984_4281_8ea0_5d1db8a03edc.slice/crio-conmon-d2137644e16498ed9498042acf081b4e24799d067312a5dae03f4d1d622921ad.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58b89810_ff58_4c6f_941a_9c3c85bb8f5f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e35b785_4c7d_4677_bd3c_8642931036c0.slice/crio-conmon-91e6f87023c54ff05031e27b2e720b8d5d7fbd7b9e15e7132d1c3c580fe5a30d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58b89810_ff58_4c6f_941a_9c3c85bb8f5f.slice/crio-conmon-5ee0bafedd48b9b2fcc09bff922a9a112360958b6edbcf042ffeb6c9b079a700.scope\": RecentStats: unable to find data in memory cache]" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.099324 4803 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 22:13:45 crc kubenswrapper[4803]: W0127 22:13:45.135282 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3bc9280_942c_487a_85ef_3da17fa151ba.slice/crio-ff17b9084cd24ee86f9a9d9e66f7f6fc037972b6ef3d2b9cdf39c4f162eecc7f WatchSource:0}: Error finding container ff17b9084cd24ee86f9a9d9e66f7f6fc037972b6ef3d2b9cdf39c4f162eecc7f: Status 404 returned error can't find the container with id ff17b9084cd24ee86f9a9d9e66f7f6fc037972b6ef3d2b9cdf39c4f162eecc7f Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.518795 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd7bp" event={"ID":"78144907-f957-40fb-a2f5-c95fe6c56ae7","Type":"ContainerStarted","Data":"5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0"} Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.521588 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e3bc9280-942c-487a-85ef-3da17fa151ba","Type":"ContainerStarted","Data":"ff17b9084cd24ee86f9a9d9e66f7f6fc037972b6ef3d2b9cdf39c4f162eecc7f"} Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.524330 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e35b785-4c7d-4677-bd3c-8642931036c0","Type":"ContainerDied","Data":"a40d2c9e7d89a55280e1287e16d7f29553f0fff170660d9e638cd3cde2b8bc56"} Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.524381 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a40d2c9e7d89a55280e1287e16d7f29553f0fff170660d9e638cd3cde2b8bc56" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.531098 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc","Type":"ContainerDied","Data":"255c392a8cc2639bc55ff09de69c714845d01f473a3f5720d1411ea89738a019"} Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.531136 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="255c392a8cc2639bc55ff09de69c714845d01f473a3f5720d1411ea89738a019" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.551963 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hd7bp" podStartSLOduration=4.116851418 podStartE2EDuration="8.551942641s" podCreationTimestamp="2026-01-27 22:13:37 +0000 UTC" firstStartedPulling="2026-01-27 22:13:39.401345681 +0000 UTC m=+1571.817367390" lastFinishedPulling="2026-01-27 22:13:43.836436914 +0000 UTC m=+1576.252458613" observedRunningTime="2026-01-27 22:13:45.544348626 +0000 UTC m=+1577.960370325" watchObservedRunningTime="2026-01-27 22:13:45.551942641 +0000 UTC m=+1577.967964340" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.571737 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.577395 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.631612 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-combined-ca-bundle\") pod \"3e35b785-4c7d-4677-bd3c-8642931036c0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.631686 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-combined-ca-bundle\") pod \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.631725 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-config-data\") pod \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.631897 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-logs\") pod \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.631964 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-config-data\") pod \"3e35b785-4c7d-4677-bd3c-8642931036c0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.632029 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp99n\" (UniqueName: \"kubernetes.io/projected/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-kube-api-access-kp99n\") pod \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\" (UID: \"1e7d2d7f-1984-4281-8ea0-5d1db8a03edc\") " Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.632143 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9dnt\" (UniqueName: \"kubernetes.io/projected/3e35b785-4c7d-4677-bd3c-8642931036c0-kube-api-access-w9dnt\") pod \"3e35b785-4c7d-4677-bd3c-8642931036c0\" (UID: \"3e35b785-4c7d-4677-bd3c-8642931036c0\") " Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.632921 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-logs" (OuterVolumeSpecName: "logs") pod "1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" (UID: "1e7d2d7f-1984-4281-8ea0-5d1db8a03edc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.644917 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-kube-api-access-kp99n" (OuterVolumeSpecName: "kube-api-access-kp99n") pod "1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" (UID: "1e7d2d7f-1984-4281-8ea0-5d1db8a03edc"). InnerVolumeSpecName "kube-api-access-kp99n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.651859 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e35b785-4c7d-4677-bd3c-8642931036c0-kube-api-access-w9dnt" (OuterVolumeSpecName: "kube-api-access-w9dnt") pod "3e35b785-4c7d-4677-bd3c-8642931036c0" (UID: "3e35b785-4c7d-4677-bd3c-8642931036c0"). InnerVolumeSpecName "kube-api-access-w9dnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.675380 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-config-data" (OuterVolumeSpecName: "config-data") pod "1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" (UID: "1e7d2d7f-1984-4281-8ea0-5d1db8a03edc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.677021 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-config-data" (OuterVolumeSpecName: "config-data") pod "3e35b785-4c7d-4677-bd3c-8642931036c0" (UID: "3e35b785-4c7d-4677-bd3c-8642931036c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.685491 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e35b785-4c7d-4677-bd3c-8642931036c0" (UID: "3e35b785-4c7d-4677-bd3c-8642931036c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.704639 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" (UID: "1e7d2d7f-1984-4281-8ea0-5d1db8a03edc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.735029 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9dnt\" (UniqueName: \"kubernetes.io/projected/3e35b785-4c7d-4677-bd3c-8642931036c0-kube-api-access-w9dnt\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.735057 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.735066 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.735076 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.735086 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-logs\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.735094 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e35b785-4c7d-4677-bd3c-8642931036c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:45 crc kubenswrapper[4803]: I0127 22:13:45.735105 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp99n\" (UniqueName: \"kubernetes.io/projected/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc-kube-api-access-kp99n\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.544526 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e3bc9280-942c-487a-85ef-3da17fa151ba","Type":"ContainerStarted","Data":"0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6"} Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.544544 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.544560 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.580898 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.595860 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.607596 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.621985 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.634373 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:13:46 crc kubenswrapper[4803]: E0127 22:13:46.634923 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" containerName="nova-metadata-metadata" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.634945 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" containerName="nova-metadata-metadata" Jan 27 22:13:46 crc kubenswrapper[4803]: E0127 22:13:46.634961 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e35b785-4c7d-4677-bd3c-8642931036c0" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.634968 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e35b785-4c7d-4677-bd3c-8642931036c0" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 22:13:46 crc kubenswrapper[4803]: E0127 22:13:46.634996 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" containerName="nova-metadata-log" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.635003 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" containerName="nova-metadata-log" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.635398 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" containerName="nova-metadata-log" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.635422 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e35b785-4c7d-4677-bd3c-8642931036c0" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.635444 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" containerName="nova-metadata-metadata" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.636811 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.645595 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.649242 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.649489 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.662189 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.664819 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.667368 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.667648 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.667796 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.683026 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.758610 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-config-data\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.761261 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gpgc\" (UniqueName: \"kubernetes.io/projected/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-kube-api-access-7gpgc\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.761431 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dchpg\" (UniqueName: \"kubernetes.io/projected/a06bce4f-9283-47de-bf15-5b1ae229961e-kube-api-access-dchpg\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.761540 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.761621 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 
22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.761683 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.761820 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.761935 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-logs\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.762017 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.762117 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.864548 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.864616 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.864639 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.864768 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.864824 4803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-logs\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.864876 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.864936 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.865030 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-config-data\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.865055 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gpgc\" (UniqueName: \"kubernetes.io/projected/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-kube-api-access-7gpgc\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.865166 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dchpg\" (UniqueName: \"kubernetes.io/projected/a06bce4f-9283-47de-bf15-5b1ae229961e-kube-api-access-dchpg\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.866305 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-logs\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.881835 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.883377 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.883891 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-config-data\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") 
" pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.886137 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.889340 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.891505 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gpgc\" (UniqueName: \"kubernetes.io/projected/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-kube-api-access-7gpgc\") pod \"nova-metadata-0\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.892763 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dchpg\" (UniqueName: \"kubernetes.io/projected/a06bce4f-9283-47de-bf15-5b1ae229961e-kube-api-access-dchpg\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.892763 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.911379 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a06bce4f-9283-47de-bf15-5b1ae229961e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a06bce4f-9283-47de-bf15-5b1ae229961e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.969871 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 22:13:46 crc kubenswrapper[4803]: I0127 22:13:46.998557 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.343271 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.646192 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.659373 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.666411 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.667598 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.734467 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:47 crc kubenswrapper[4803]: W0127 22:13:47.834623 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda06bce4f_9283_47de_bf15_5b1ae229961e.slice/crio-83356f9de253e40d8795e7518095a27d36e40a04133dc8d30b111520bd2bbdf2 WatchSource:0}: Error finding container 83356f9de253e40d8795e7518095a27d36e40a04133dc8d30b111520bd2bbdf2: Status 404 returned error can't find the container with id 83356f9de253e40d8795e7518095a27d36e40a04133dc8d30b111520bd2bbdf2 Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.835373 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.835719 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="ceilometer-central-agent" containerID="cri-o://19a49f108ef5c15b9d40503cacc12835e41a96b2636bcb2fbdd620c587609951" gracePeriod=30 Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.835797 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="ceilometer-notification-agent" containerID="cri-o://e1e601996a5a07fc2678c0d069c16fc1022fb0b1a337fa336574dcbb61f94b10" gracePeriod=30 Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.835765 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="proxy-httpd" containerID="cri-o://eb63e6cdb7c9855f9c905c6505991397a9175f1ecf4d6b21005c307ddaa2f4fa" gracePeriod=30 Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.835753 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="sg-core" containerID="cri-o://7d8ea161a76fa2937f6217974a3a743f9e77fdab0d7b1fbf3a94ac21075ee3e3" gracePeriod=30 Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.846944 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.248:3000/\": EOF" Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.907007 4803 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.907341 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.907878 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.908332 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.910091 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 22:13:47 crc kubenswrapper[4803]: I0127 22:13:47.910276 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.033937 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.248:3000/\": dial tcp 10.217.0.248:3000: connect: connection refused" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.128938 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-cbgct"] Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.132364 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.179972 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-cbgct"] Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.255287 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.255449 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.255477 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.255518 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.255540 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-config\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.255602 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frflx\" (UniqueName: \"kubernetes.io/projected/29833af4-166d-4666-a071-f3f7e0d4ac91-kube-api-access-frflx\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.323926 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e7d2d7f-1984-4281-8ea0-5d1db8a03edc" path="/var/lib/kubelet/pods/1e7d2d7f-1984-4281-8ea0-5d1db8a03edc/volumes" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.324532 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e35b785-4c7d-4677-bd3c-8642931036c0" path="/var/lib/kubelet/pods/3e35b785-4c7d-4677-bd3c-8642931036c0/volumes" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.363796 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.363966 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.364063 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.364101 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-config\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.364262 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frflx\" (UniqueName: \"kubernetes.io/projected/29833af4-166d-4666-a071-f3f7e0d4ac91-kube-api-access-frflx\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.364335 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.365437 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.365997 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.366035 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-config\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.366656 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.367267 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.407829 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frflx\" (UniqueName: \"kubernetes.io/projected/29833af4-166d-4666-a071-f3f7e0d4ac91-kube-api-access-frflx\") pod \"dnsmasq-dns-f84f9ccf-cbgct\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") " pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.590997 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.637748 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c","Type":"ContainerStarted","Data":"b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c"} Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.637812 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c","Type":"ContainerStarted","Data":"4c05dab4c0c4f31e310542389c8cd8fc89323f246ec152e38c3f00ea3499d54a"} Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.641386 4803 generic.go:334] "Generic (PLEG): container finished" podID="d41adc57-850e-4967-aa19-042dc8e991f9" containerID="eb63e6cdb7c9855f9c905c6505991397a9175f1ecf4d6b21005c307ddaa2f4fa" exitCode=0 Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.641418 4803 generic.go:334] "Generic (PLEG): container finished" podID="d41adc57-850e-4967-aa19-042dc8e991f9" containerID="7d8ea161a76fa2937f6217974a3a743f9e77fdab0d7b1fbf3a94ac21075ee3e3" exitCode=2 Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.641432 4803 generic.go:334] "Generic (PLEG): container finished" podID="d41adc57-850e-4967-aa19-042dc8e991f9" containerID="19a49f108ef5c15b9d40503cacc12835e41a96b2636bcb2fbdd620c587609951" exitCode=0 Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.641478 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41adc57-850e-4967-aa19-042dc8e991f9","Type":"ContainerDied","Data":"eb63e6cdb7c9855f9c905c6505991397a9175f1ecf4d6b21005c307ddaa2f4fa"} Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.641502 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41adc57-850e-4967-aa19-042dc8e991f9","Type":"ContainerDied","Data":"7d8ea161a76fa2937f6217974a3a743f9e77fdab0d7b1fbf3a94ac21075ee3e3"} Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.641517 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41adc57-850e-4967-aa19-042dc8e991f9","Type":"ContainerDied","Data":"19a49f108ef5c15b9d40503cacc12835e41a96b2636bcb2fbdd620c587609951"} Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.647589 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e3bc9280-942c-487a-85ef-3da17fa151ba","Type":"ContainerStarted","Data":"ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d"} Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.650958 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a06bce4f-9283-47de-bf15-5b1ae229961e","Type":"ContainerStarted","Data":"fcfbabaed3d4ec92fe610013c3c32c0ec3d87cb3d9781442aaac5dc23565da4a"} Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.650999 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a06bce4f-9283-47de-bf15-5b1ae229961e","Type":"ContainerStarted","Data":"83356f9de253e40d8795e7518095a27d36e40a04133dc8d30b111520bd2bbdf2"} Jan 27 22:13:48 crc kubenswrapper[4803]: I0127 22:13:48.679796 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.67977789 podStartE2EDuration="2.67977789s" podCreationTimestamp="2026-01-27 22:13:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:13:48.665701861 +0000 UTC m=+1581.081723570" watchObservedRunningTime="2026-01-27 22:13:48.67977789 +0000 UTC m=+1581.095799589" Jan 27 22:13:49 crc kubenswrapper[4803]: I0127 22:13:49.234539 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-cbgct"] Jan 27 22:13:49 crc kubenswrapper[4803]: W0127 22:13:49.273651 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29833af4_166d_4666_a071_f3f7e0d4ac91.slice/crio-ab3056df4df99f6bd64273a88c4e52fbb88bc5634fe25e5e6d9d833ed30fdaf9 WatchSource:0}: Error finding container ab3056df4df99f6bd64273a88c4e52fbb88bc5634fe25e5e6d9d833ed30fdaf9: Status 404 returned error can't find the container with id ab3056df4df99f6bd64273a88c4e52fbb88bc5634fe25e5e6d9d833ed30fdaf9 Jan 27 22:13:49 crc kubenswrapper[4803]: I0127 22:13:49.662242 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" event={"ID":"29833af4-166d-4666-a071-f3f7e0d4ac91","Type":"ContainerStarted","Data":"ab3056df4df99f6bd64273a88c4e52fbb88bc5634fe25e5e6d9d833ed30fdaf9"} Jan 27 22:13:49 crc kubenswrapper[4803]: I0127 22:13:49.665639 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c","Type":"ContainerStarted","Data":"bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1"} Jan 27 22:13:49 crc kubenswrapper[4803]: I0127 22:13:49.701042 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.701020751 podStartE2EDuration="3.701020751s" podCreationTimestamp="2026-01-27 22:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:13:49.6946628 +0000 UTC m=+1582.110684499" watchObservedRunningTime="2026-01-27 22:13:49.701020751 +0000 UTC m=+1582.117042460" Jan 27 22:13:49 crc kubenswrapper[4803]: I0127 22:13:49.771896 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hd7bp" Jan 27 22:13:49 crc kubenswrapper[4803]: I0127 22:13:49.858316 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hd7bp"] Jan 27 22:13:50 crc kubenswrapper[4803]: I0127 22:13:50.702629 4803 generic.go:334] "Generic (PLEG): container finished" podID="29833af4-166d-4666-a071-f3f7e0d4ac91" containerID="72a17f8b5b7d75ac535f934e01ea45626ab430a1019bdc78df05658d48cc9891" exitCode=0 Jan 27 22:13:50 crc kubenswrapper[4803]: I0127 22:13:50.702697 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" event={"ID":"29833af4-166d-4666-a071-f3f7e0d4ac91","Type":"ContainerDied","Data":"72a17f8b5b7d75ac535f934e01ea45626ab430a1019bdc78df05658d48cc9891"} Jan 27 22:13:50 crc kubenswrapper[4803]: I0127 22:13:50.712379 4803 generic.go:334] "Generic (PLEG): container finished" podID="d41adc57-850e-4967-aa19-042dc8e991f9" containerID="e1e601996a5a07fc2678c0d069c16fc1022fb0b1a337fa336574dcbb61f94b10" exitCode=0 Jan 27 22:13:50 crc kubenswrapper[4803]: I0127 22:13:50.713381 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d41adc57-850e-4967-aa19-042dc8e991f9","Type":"ContainerDied","Data":"e1e601996a5a07fc2678c0d069c16fc1022fb0b1a337fa336574dcbb61f94b10"} Jan 27 22:13:50 crc kubenswrapper[4803]: I0127 22:13:50.735734 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:13:50 crc kubenswrapper[4803]: I0127 22:13:50.735971 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerName="nova-api-log" containerID="cri-o://66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030" gracePeriod=30 Jan 27 22:13:50 crc kubenswrapper[4803]: I0127 22:13:50.736192 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerName="nova-api-api" containerID="cri-o://8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0" gracePeriod=30 Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.121739 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.284582 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-config-data\") pod \"d41adc57-850e-4967-aa19-042dc8e991f9\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.284677 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-log-httpd\") pod \"d41adc57-850e-4967-aa19-042dc8e991f9\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.284745 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-sg-core-conf-yaml\") pod \"d41adc57-850e-4967-aa19-042dc8e991f9\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.284787 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4f5x\" (UniqueName: \"kubernetes.io/projected/d41adc57-850e-4967-aa19-042dc8e991f9-kube-api-access-q4f5x\") pod \"d41adc57-850e-4967-aa19-042dc8e991f9\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.284838 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-run-httpd\") pod \"d41adc57-850e-4967-aa19-042dc8e991f9\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.285043 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-scripts\") pod \"d41adc57-850e-4967-aa19-042dc8e991f9\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.285092 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-combined-ca-bundle\") pod 
\"d41adc57-850e-4967-aa19-042dc8e991f9\" (UID: \"d41adc57-850e-4967-aa19-042dc8e991f9\") " Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.285561 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d41adc57-850e-4967-aa19-042dc8e991f9" (UID: "d41adc57-850e-4967-aa19-042dc8e991f9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.285739 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d41adc57-850e-4967-aa19-042dc8e991f9" (UID: "d41adc57-850e-4967-aa19-042dc8e991f9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.286212 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.286234 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41adc57-850e-4967-aa19-042dc8e991f9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.293712 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-scripts" (OuterVolumeSpecName: "scripts") pod "d41adc57-850e-4967-aa19-042dc8e991f9" (UID: "d41adc57-850e-4967-aa19-042dc8e991f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.294437 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d41adc57-850e-4967-aa19-042dc8e991f9-kube-api-access-q4f5x" (OuterVolumeSpecName: "kube-api-access-q4f5x") pod "d41adc57-850e-4967-aa19-042dc8e991f9" (UID: "d41adc57-850e-4967-aa19-042dc8e991f9"). InnerVolumeSpecName "kube-api-access-q4f5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.308543 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:13:51 crc kubenswrapper[4803]: E0127 22:13:51.308914 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.348330 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d41adc57-850e-4967-aa19-042dc8e991f9" (UID: "d41adc57-850e-4967-aa19-042dc8e991f9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.389604 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.389637 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4f5x\" (UniqueName: \"kubernetes.io/projected/d41adc57-850e-4967-aa19-042dc8e991f9-kube-api-access-q4f5x\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.389649 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.425380 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d41adc57-850e-4967-aa19-042dc8e991f9" (UID: "d41adc57-850e-4967-aa19-042dc8e991f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.427774 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-config-data" (OuterVolumeSpecName: "config-data") pod "d41adc57-850e-4967-aa19-042dc8e991f9" (UID: "d41adc57-850e-4967-aa19-042dc8e991f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.501516 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.501567 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41adc57-850e-4967-aa19-042dc8e991f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.729020 4803 generic.go:334] "Generic (PLEG): container finished" podID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerID="66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030" exitCode=143 Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.729152 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb00ca76-3437-43a0-ada9-1a37c535137c","Type":"ContainerDied","Data":"66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030"} Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.735513 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41adc57-850e-4967-aa19-042dc8e991f9","Type":"ContainerDied","Data":"1393d63576fe013b2e3adc36449d0a020fe7d9784f03e16b3e8ce8c493ab565b"} Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.735548 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.735558 4803 scope.go:117] "RemoveContainer" containerID="eb63e6cdb7c9855f9c905c6505991397a9175f1ecf4d6b21005c307ddaa2f4fa" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.738164 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e3bc9280-942c-487a-85ef-3da17fa151ba","Type":"ContainerStarted","Data":"940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269"} Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.741025 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" event={"ID":"29833af4-166d-4666-a071-f3f7e0d4ac91","Type":"ContainerStarted","Data":"0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e"} Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.741076 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.741303 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hd7bp" podUID="78144907-f957-40fb-a2f5-c95fe6c56ae7" containerName="registry-server" containerID="cri-o://5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0" gracePeriod=2 Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.767510 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" podStartSLOduration=3.767488659 podStartE2EDuration="3.767488659s" podCreationTimestamp="2026-01-27 22:13:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:13:51.764425627 +0000 UTC m=+1584.180447326" watchObservedRunningTime="2026-01-27 22:13:51.767488659 +0000 UTC m=+1584.183510378" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.785810 4803 scope.go:117] "RemoveContainer" containerID="7d8ea161a76fa2937f6217974a3a743f9e77fdab0d7b1fbf3a94ac21075ee3e3" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.802108 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.833827 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.860963 4803 scope.go:117] "RemoveContainer" containerID="e1e601996a5a07fc2678c0d069c16fc1022fb0b1a337fa336574dcbb61f94b10" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.861712 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:13:51 crc kubenswrapper[4803]: E0127 22:13:51.862314 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="ceilometer-central-agent" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.862342 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="ceilometer-central-agent" Jan 27 22:13:51 crc kubenswrapper[4803]: E0127 22:13:51.862365 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="proxy-httpd" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.862376 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="proxy-httpd" 
Jan 27 22:13:51 crc kubenswrapper[4803]: E0127 22:13:51.862424 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="sg-core" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.862435 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="sg-core" Jan 27 22:13:51 crc kubenswrapper[4803]: E0127 22:13:51.862462 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="ceilometer-notification-agent" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.862472 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="ceilometer-notification-agent" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.862952 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="proxy-httpd" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.862998 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="sg-core" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.863017 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="ceilometer-notification-agent" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.863033 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" containerName="ceilometer-central-agent" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.866190 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.868926 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.869522 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.883706 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.971613 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.972243 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 22:13:51 crc kubenswrapper[4803]: I0127 22:13:51.999187 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.012318 4803 scope.go:117] "RemoveContainer" containerID="19a49f108ef5c15b9d40503cacc12835e41a96b2636bcb2fbdd620c587609951" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.013171 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-config-data\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.013276 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-scripts\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.013297 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.013330 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.013363 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-log-httpd\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.013415 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-run-httpd\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.013438 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8kpv\" (UniqueName: \"kubernetes.io/projected/608b9bb2-1f2c-4320-a9a3-50706f74bd06-kube-api-access-f8kpv\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.116501 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-scripts\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.116550 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.116614 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.116657 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-log-httpd\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.116749 4803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-run-httpd\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.116773 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8kpv\" (UniqueName: \"kubernetes.io/projected/608b9bb2-1f2c-4320-a9a3-50706f74bd06-kube-api-access-f8kpv\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.116948 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-config-data\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.117356 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-log-httpd\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.117764 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-run-httpd\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.133551 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.134136 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.134914 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-config-data\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.136102 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-scripts\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.142002 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8kpv\" (UniqueName: \"kubernetes.io/projected/608b9bb2-1f2c-4320-a9a3-50706f74bd06-kube-api-access-f8kpv\") pod \"ceilometer-0\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") " pod="openstack/ceilometer-0" Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.259585 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.327789 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d41adc57-850e-4967-aa19-042dc8e991f9" path="/var/lib/kubelet/pods/d41adc57-850e-4967-aa19-042dc8e991f9/volumes"
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.441812 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hd7bp"
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.529817 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-utilities\") pod \"78144907-f957-40fb-a2f5-c95fe6c56ae7\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") "
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.530045 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-catalog-content\") pod \"78144907-f957-40fb-a2f5-c95fe6c56ae7\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") "
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.530110 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxdg2\" (UniqueName: \"kubernetes.io/projected/78144907-f957-40fb-a2f5-c95fe6c56ae7-kube-api-access-hxdg2\") pod \"78144907-f957-40fb-a2f5-c95fe6c56ae7\" (UID: \"78144907-f957-40fb-a2f5-c95fe6c56ae7\") "
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.530874 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-utilities" (OuterVolumeSpecName: "utilities") pod "78144907-f957-40fb-a2f5-c95fe6c56ae7" (UID: "78144907-f957-40fb-a2f5-c95fe6c56ae7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.531088 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.538299 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78144907-f957-40fb-a2f5-c95fe6c56ae7-kube-api-access-hxdg2" (OuterVolumeSpecName: "kube-api-access-hxdg2") pod "78144907-f957-40fb-a2f5-c95fe6c56ae7" (UID: "78144907-f957-40fb-a2f5-c95fe6c56ae7"). InnerVolumeSpecName "kube-api-access-hxdg2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.554771 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78144907-f957-40fb-a2f5-c95fe6c56ae7" (UID: "78144907-f957-40fb-a2f5-c95fe6c56ae7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.633055 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78144907-f957-40fb-a2f5-c95fe6c56ae7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.633319 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxdg2\" (UniqueName: \"kubernetes.io/projected/78144907-f957-40fb-a2f5-c95fe6c56ae7-kube-api-access-hxdg2\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.757936 4803 generic.go:334] "Generic (PLEG): container finished" podID="78144907-f957-40fb-a2f5-c95fe6c56ae7" containerID="5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0" exitCode=0
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.758001 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd7bp" event={"ID":"78144907-f957-40fb-a2f5-c95fe6c56ae7","Type":"ContainerDied","Data":"5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0"}
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.758064 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd7bp" event={"ID":"78144907-f957-40fb-a2f5-c95fe6c56ae7","Type":"ContainerDied","Data":"e44fdb8958a2e7f2bddcce7d2ccb962bddf7ac79191273202f4a0d93b49edead"}
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.758017 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hd7bp"
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.758089 4803 scope.go:117] "RemoveContainer" containerID="5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0"
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.814043 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hd7bp"]
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.828547 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hd7bp"]
Jan 27 22:13:52 crc kubenswrapper[4803]: I0127 22:13:52.846909 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 22:13:53 crc kubenswrapper[4803]: W0127 22:13:53.206005 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod608b9bb2_1f2c_4320_a9a3_50706f74bd06.slice/crio-3a0be3e45a28e992afdfc61cde64ad7a6b4eb0bef12201ddd30725122dcb68ee WatchSource:0}: Error finding container 3a0be3e45a28e992afdfc61cde64ad7a6b4eb0bef12201ddd30725122dcb68ee: Status 404 returned error can't find the container with id 3a0be3e45a28e992afdfc61cde64ad7a6b4eb0bef12201ddd30725122dcb68ee
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.217338 4803 scope.go:117] "RemoveContainer" containerID="444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7"
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.290203 4803 scope.go:117] "RemoveContainer" containerID="37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07"
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.378132 4803 scope.go:117] "RemoveContainer" containerID="5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0"
Jan 27 22:13:53 crc kubenswrapper[4803]: E0127 22:13:53.378526 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0\": container with ID starting with 5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0 not found: ID does not exist" containerID="5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0"
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.378571 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0"} err="failed to get container status \"5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0\": rpc error: code = NotFound desc = could not find container \"5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0\": container with ID starting with 5166bf5df830f2664a22c7f7dd937096347ed48fc7fa497995ff08246ad6b4e0 not found: ID does not exist"
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.378597 4803 scope.go:117] "RemoveContainer" containerID="444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7"
Jan 27 22:13:53 crc kubenswrapper[4803]: E0127 22:13:53.378967 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7\": container with ID starting with 444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7 not found: ID does not exist" containerID="444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7"
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.379092 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7"} err="failed to get container status \"444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7\": rpc error: code = NotFound desc = could not find container \"444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7\": container with ID starting with 444837ad904a604b63e52a53666d9026486c1d87fe1735325f52843d59df4ee7 not found: ID does not exist"
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.379203 4803 scope.go:117] "RemoveContainer" containerID="37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07"
Jan 27 22:13:53 crc kubenswrapper[4803]: E0127 22:13:53.379953 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07\": container with ID starting with 37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07 not found: ID does not exist" containerID="37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07"
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.379986 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07"} err="failed to get container status \"37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07\": rpc error: code = NotFound desc = could not find container \"37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07\": container with ID starting with 37c47855e3c50ecfb9412300cb874313d716e4a540a11b62b44a38ff35c2bc07 not found: ID does not exist"
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.770555 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"608b9bb2-1f2c-4320-a9a3-50706f74bd06","Type":"ContainerStarted","Data":"3a0be3e45a28e992afdfc61cde64ad7a6b4eb0bef12201ddd30725122dcb68ee"}
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.777787 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e3bc9280-942c-487a-85ef-3da17fa151ba","Type":"ContainerStarted","Data":"70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b"}
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.777926 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-api" containerID="cri-o://0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6" gracePeriod=30
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.777994 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-listener" containerID="cri-o://70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b" gracePeriod=30
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.778033 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-evaluator" containerID="cri-o://ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d" gracePeriod=30
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.778125 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-notifier" containerID="cri-o://940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269" gracePeriod=30
Jan 27 22:13:53 crc kubenswrapper[4803]: I0127 22:13:53.831955 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.605163373 podStartE2EDuration="10.831926862s" podCreationTimestamp="2026-01-27 22:13:43 +0000 UTC" firstStartedPulling="2026-01-27 22:13:45.1526384 +0000 UTC m=+1577.568660099" lastFinishedPulling="2026-01-27 22:13:53.379401889 +0000 UTC m=+1585.795423588" observedRunningTime="2026-01-27 22:13:53.809495429 +0000 UTC m=+1586.225517148" watchObservedRunningTime="2026-01-27 22:13:53.831926862 +0000 UTC m=+1586.247948561"
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.342156 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78144907-f957-40fb-a2f5-c95fe6c56ae7" path="/var/lib/kubelet/pods/78144907-f957-40fb-a2f5-c95fe6c56ae7/volumes"
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.950688 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.950883 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"608b9bb2-1f2c-4320-a9a3-50706f74bd06","Type":"ContainerStarted","Data":"9db280fddc1d1d40a5889fc043b56922f5f0660cc0d85307f77be9750530b949"}
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.951301 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"608b9bb2-1f2c-4320-a9a3-50706f74bd06","Type":"ContainerStarted","Data":"edff052d2637cf2f506b8d1ec695638b43afa1fcbfb45aba2a8cd9ee54ec31c3"}
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.952397 4803 generic.go:334] "Generic (PLEG): container finished" podID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerID="8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0" exitCode=0
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.952455 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb00ca76-3437-43a0-ada9-1a37c535137c","Type":"ContainerDied","Data":"8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0"}
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.952473 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb00ca76-3437-43a0-ada9-1a37c535137c","Type":"ContainerDied","Data":"dfe638f65cadbca3aa0dfcaa3de0f76d45265029e9ec266d404842dd48068ccf"}
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.952489 4803 scope.go:117] "RemoveContainer" containerID="8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0"
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.967319 4803 generic.go:334] "Generic (PLEG): container finished" podID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerID="940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269" exitCode=0
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.967347 4803 generic.go:334] "Generic (PLEG): container finished" podID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerID="ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d" exitCode=0
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.967355 4803 generic.go:334] "Generic (PLEG): container finished" podID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerID="0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6" exitCode=0
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.967377 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e3bc9280-942c-487a-85ef-3da17fa151ba","Type":"ContainerDied","Data":"940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269"}
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.967402 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e3bc9280-942c-487a-85ef-3da17fa151ba","Type":"ContainerDied","Data":"ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d"}
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.967411 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e3bc9280-942c-487a-85ef-3da17fa151ba","Type":"ContainerDied","Data":"0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6"}
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.978865 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 22:13:54 crc kubenswrapper[4803]: I0127 22:13:54.984490 4803 scope.go:117] "RemoveContainer" containerID="66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030"
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.021830 4803 scope.go:117] "RemoveContainer" containerID="8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0"
Jan 27 22:13:55 crc kubenswrapper[4803]: E0127 22:13:55.022737 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0\": container with ID starting with 8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0 not found: ID does not exist" containerID="8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0"
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.022792 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0"} err="failed to get container status \"8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0\": rpc error: code = NotFound desc = could not find container \"8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0\": container with ID starting with 8ce2888d6e3fd00929bda01ff8c3982c20f0ec712b8d6eb288d05fb12ea341f0 not found: ID does not exist"
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.022825 4803 scope.go:117] "RemoveContainer" containerID="66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030"
Jan 27 22:13:55 crc kubenswrapper[4803]: E0127 22:13:55.023273 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030\": container with ID starting with 66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030 not found: ID does not exist" containerID="66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030"
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.023313 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030"} err="failed to get container status \"66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030\": rpc error: code = NotFound desc = could not find container \"66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030\": container with ID starting with 66699521f928a96bd6b4712cac06827153b66991b1c2912ee172cc68cc616030 not found: ID does not exist"
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.137350 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-combined-ca-bundle\") pod \"eb00ca76-3437-43a0-ada9-1a37c535137c\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") "
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.137619 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-config-data\") pod \"eb00ca76-3437-43a0-ada9-1a37c535137c\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") "
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.137950 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn4lc\" (UniqueName: \"kubernetes.io/projected/eb00ca76-3437-43a0-ada9-1a37c535137c-kube-api-access-hn4lc\") pod \"eb00ca76-3437-43a0-ada9-1a37c535137c\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") "
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.138153 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb00ca76-3437-43a0-ada9-1a37c535137c-logs\") pod \"eb00ca76-3437-43a0-ada9-1a37c535137c\" (UID: \"eb00ca76-3437-43a0-ada9-1a37c535137c\") "
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.139358 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb00ca76-3437-43a0-ada9-1a37c535137c-logs" (OuterVolumeSpecName: "logs") pod "eb00ca76-3437-43a0-ada9-1a37c535137c" (UID: "eb00ca76-3437-43a0-ada9-1a37c535137c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.144570 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb00ca76-3437-43a0-ada9-1a37c535137c-kube-api-access-hn4lc" (OuterVolumeSpecName: "kube-api-access-hn4lc") pod "eb00ca76-3437-43a0-ada9-1a37c535137c" (UID: "eb00ca76-3437-43a0-ada9-1a37c535137c"). InnerVolumeSpecName "kube-api-access-hn4lc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.176921 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb00ca76-3437-43a0-ada9-1a37c535137c" (UID: "eb00ca76-3437-43a0-ada9-1a37c535137c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.197648 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-config-data" (OuterVolumeSpecName: "config-data") pod "eb00ca76-3437-43a0-ada9-1a37c535137c" (UID: "eb00ca76-3437-43a0-ada9-1a37c535137c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.245159 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb00ca76-3437-43a0-ada9-1a37c535137c-logs\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.245202 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.245236 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb00ca76-3437-43a0-ada9-1a37c535137c-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.245245 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn4lc\" (UniqueName: \"kubernetes.io/projected/eb00ca76-3437-43a0-ada9-1a37c535137c-kube-api-access-hn4lc\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.981804 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 22:13:55 crc kubenswrapper[4803]: I0127 22:13:55.986391 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"608b9bb2-1f2c-4320-a9a3-50706f74bd06","Type":"ContainerStarted","Data":"039af7d9b6c6cd3077975658bcb218c537c2f133e25ae39b65d3215535a3c952"}
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.027837 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.039867 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.056465 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 22:13:56 crc kubenswrapper[4803]: E0127 22:13:56.057016 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerName="nova-api-log"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.057037 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerName="nova-api-log"
Jan 27 22:13:56 crc kubenswrapper[4803]: E0127 22:13:56.057063 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78144907-f957-40fb-a2f5-c95fe6c56ae7" containerName="extract-utilities"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.057072 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="78144907-f957-40fb-a2f5-c95fe6c56ae7" containerName="extract-utilities"
Jan 27 22:13:56 crc kubenswrapper[4803]: E0127 22:13:56.057084 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78144907-f957-40fb-a2f5-c95fe6c56ae7" containerName="extract-content"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.057090 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="78144907-f957-40fb-a2f5-c95fe6c56ae7" containerName="extract-content"
Jan 27 22:13:56 crc kubenswrapper[4803]: E0127 22:13:56.057103 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerName="nova-api-api"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.057109 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerName="nova-api-api"
Jan 27 22:13:56 crc kubenswrapper[4803]: E0127 22:13:56.057121 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78144907-f957-40fb-a2f5-c95fe6c56ae7" containerName="registry-server"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.057128 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="78144907-f957-40fb-a2f5-c95fe6c56ae7" containerName="registry-server"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.057353 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="78144907-f957-40fb-a2f5-c95fe6c56ae7" containerName="registry-server"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.057376 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerName="nova-api-api"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.057395 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb00ca76-3437-43a0-ada9-1a37c535137c" containerName="nova-api-log"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.059432 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.061751 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.062348 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.077096 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.089967 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.164721 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/495d58f5-c8ce-44c6-a844-57ec21deb347-logs\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.164774 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-public-tls-certs\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.164823 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.165170 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f27qh\" (UniqueName: \"kubernetes.io/projected/495d58f5-c8ce-44c6-a844-57ec21deb347-kube-api-access-f27qh\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.165245 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-internal-tls-certs\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.165376 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-config-data\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.267598 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-config-data\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.267721 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/495d58f5-c8ce-44c6-a844-57ec21deb347-logs\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.267758 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-public-tls-certs\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.267809 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.267979 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f27qh\" (UniqueName: \"kubernetes.io/projected/495d58f5-c8ce-44c6-a844-57ec21deb347-kube-api-access-f27qh\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.268016 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-internal-tls-certs\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.268740 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/495d58f5-c8ce-44c6-a844-57ec21deb347-logs\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.273194 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-public-tls-certs\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.274478 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-config-data\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.274901 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.274988 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-internal-tls-certs\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.288901 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f27qh\" (UniqueName: \"kubernetes.io/projected/495d58f5-c8ce-44c6-a844-57ec21deb347-kube-api-access-f27qh\") pod \"nova-api-0\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.319638 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb00ca76-3437-43a0-ada9-1a37c535137c" path="/var/lib/kubelet/pods/eb00ca76-3437-43a0-ada9-1a37c535137c/volumes"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.376897 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.970877 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.971227 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.999093 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 22:13:56 crc kubenswrapper[4803]: I0127 22:13:56.999127 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"608b9bb2-1f2c-4320-a9a3-50706f74bd06","Type":"ContainerStarted","Data":"ef7f47c41d624c806bc9cbd6715e96179a3b2b116339d824f78723399b6f06ef"}
Jan 27 22:13:57 crc kubenswrapper[4803]: I0127 22:13:56.999357 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="ceilometer-notification-agent" containerID="cri-o://9db280fddc1d1d40a5889fc043b56922f5f0660cc0d85307f77be9750530b949" gracePeriod=30
Jan 27 22:13:57 crc kubenswrapper[4803]: I0127 22:13:56.999370 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="sg-core" containerID="cri-o://039af7d9b6c6cd3077975658bcb218c537c2f133e25ae39b65d3215535a3c952" gracePeriod=30
Jan 27 22:13:57 crc kubenswrapper[4803]: I0127 22:13:56.999450 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 22:13:57 crc kubenswrapper[4803]: I0127 22:13:56.999595 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="proxy-httpd" containerID="cri-o://ef7f47c41d624c806bc9cbd6715e96179a3b2b116339d824f78723399b6f06ef" gracePeriod=30
Jan 27 22:13:57 crc kubenswrapper[4803]: I0127 22:13:57.000124 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="ceilometer-central-agent" containerID="cri-o://edff052d2637cf2f506b8d1ec695638b43afa1fcbfb45aba2a8cd9ee54ec31c3" gracePeriod=30
Jan 27 22:13:57 crc kubenswrapper[4803]: I0127 22:13:57.023449 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.755943818 podStartE2EDuration="6.023431253s" podCreationTimestamp="2026-01-27 22:13:51 +0000 UTC" firstStartedPulling="2026-01-27 22:13:53.21734328 +0000 UTC m=+1585.633364979" lastFinishedPulling="2026-01-27 22:13:56.484830715 +0000 UTC m=+1588.900852414" observedRunningTime="2026-01-27 22:13:57.019234591 +0000 UTC m=+1589.435256290" watchObservedRunningTime="2026-01-27 22:13:57.023431253 +0000 UTC m=+1589.439452942"
Jan 27 22:13:57 crc kubenswrapper[4803]: I0127 22:13:57.033072 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 22:13:57 crc kubenswrapper[4803]: I0127 22:13:57.090076 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 22:13:57 crc kubenswrapper[4803]: I0127 22:13:57.989137 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 22:13:57 crc kubenswrapper[4803]: I0127 22:13:57.989171 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.011308 4803 generic.go:334] "Generic (PLEG): container finished" podID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerID="ef7f47c41d624c806bc9cbd6715e96179a3b2b116339d824f78723399b6f06ef" exitCode=0
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.011340 4803 generic.go:334] "Generic (PLEG): container finished" podID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerID="039af7d9b6c6cd3077975658bcb218c537c2f133e25ae39b65d3215535a3c952" exitCode=2
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.011349 4803 generic.go:334] "Generic (PLEG): container finished" podID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerID="9db280fddc1d1d40a5889fc043b56922f5f0660cc0d85307f77be9750530b949" exitCode=0
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.011383 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"608b9bb2-1f2c-4320-a9a3-50706f74bd06","Type":"ContainerDied","Data":"ef7f47c41d624c806bc9cbd6715e96179a3b2b116339d824f78723399b6f06ef"}
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.011409 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"608b9bb2-1f2c-4320-a9a3-50706f74bd06","Type":"ContainerDied","Data":"039af7d9b6c6cd3077975658bcb218c537c2f133e25ae39b65d3215535a3c952"}
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.011419 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"608b9bb2-1f2c-4320-a9a3-50706f74bd06","Type":"ContainerDied","Data":"9db280fddc1d1d40a5889fc043b56922f5f0660cc0d85307f77be9750530b949"}
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.014692 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"495d58f5-c8ce-44c6-a844-57ec21deb347","Type":"ContainerStarted","Data":"3a021159c130f2955ab5d6fa5197efeec94ff6a0b1457745f5f27f685e425460"}
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.014725 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"495d58f5-c8ce-44c6-a844-57ec21deb347","Type":"ContainerStarted","Data":"a2faefefef931713e71f679be4f6f76cc6b77392d31d0b9d590be9e7a79c4566"}
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.014734 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"495d58f5-c8ce-44c6-a844-57ec21deb347","Type":"ContainerStarted","Data":"1a758f086b68857a457ec68306447a2405f665530df4dc762dcbd96a49ab8afe"}
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.035724 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.049840 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.049814843 podStartE2EDuration="2.049814843s" podCreationTimestamp="2026-01-27 22:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:13:58.040885853 +0000 UTC m=+1590.456907562" watchObservedRunningTime="2026-01-27 22:13:58.049814843 +0000 UTC m=+1590.465836542"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.223594 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-r9w4g"]
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.225811 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.228607 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.228654 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.238683 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-r9w4g"]
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.333474 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.333913 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-scripts\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.334010 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4skb\" (UniqueName: \"kubernetes.io/projected/5fef152e-fc32-4940-9c38-193b933f28ad-kube-api-access-s4skb\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.334136 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-config-data\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.436769 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.436876 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-scripts\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.436953 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4skb\" (UniqueName: \"kubernetes.io/projected/5fef152e-fc32-4940-9c38-193b933f28ad-kube-api-access-s4skb\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.436984 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-config-data\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.443325 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-scripts\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.444406 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.449707 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-config-data\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.457819 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4skb\" (UniqueName: \"kubernetes.io/projected/5fef152e-fc32-4940-9c38-193b933f28ad-kube-api-access-s4skb\") pod \"nova-cell1-cell-mapping-r9w4g\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.545757 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r9w4g"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.593334 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct"
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.672810 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pf55t"]
Jan 27 22:13:58 crc kubenswrapper[4803]: I0127 22:13:58.673114 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" podUID="70bbf6db-858e-41ec-a079-876f60dc0501" containerName="dnsmasq-dns" containerID="cri-o://7065527c8645fa3b090595903cdfd6183b57f6c8b5eaea4686b06100af778f9a" gracePeriod=10
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.028517 4803 generic.go:334] "Generic (PLEG): container finished" podID="70bbf6db-858e-41ec-a079-876f60dc0501" containerID="7065527c8645fa3b090595903cdfd6183b57f6c8b5eaea4686b06100af778f9a" exitCode=0
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.028882 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" event={"ID":"70bbf6db-858e-41ec-a079-876f60dc0501","Type":"ContainerDied","Data":"7065527c8645fa3b090595903cdfd6183b57f6c8b5eaea4686b06100af778f9a"}
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.211642 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-r9w4g"]
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.482188 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t"
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.672947 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-svc\") pod \"70bbf6db-858e-41ec-a079-876f60dc0501\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") "
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.673277 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-swift-storage-0\") pod \"70bbf6db-858e-41ec-a079-876f60dc0501\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") "
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.673384 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-nb\") pod \"70bbf6db-858e-41ec-a079-876f60dc0501\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") "
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.673463 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hwl2\" (UniqueName: \"kubernetes.io/projected/70bbf6db-858e-41ec-a079-876f60dc0501-kube-api-access-2hwl2\") pod \"70bbf6db-858e-41ec-a079-876f60dc0501\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") "
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.673730 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-config\") pod \"70bbf6db-858e-41ec-a079-876f60dc0501\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") "
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.673820 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-sb\") pod \"70bbf6db-858e-41ec-a079-876f60dc0501\" (UID: \"70bbf6db-858e-41ec-a079-876f60dc0501\") "
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.681999 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70bbf6db-858e-41ec-a079-876f60dc0501-kube-api-access-2hwl2" (OuterVolumeSpecName: "kube-api-access-2hwl2") pod "70bbf6db-858e-41ec-a079-876f60dc0501" (UID: "70bbf6db-858e-41ec-a079-876f60dc0501"). InnerVolumeSpecName "kube-api-access-2hwl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.776594 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hwl2\" (UniqueName: \"kubernetes.io/projected/70bbf6db-858e-41ec-a079-876f60dc0501-kube-api-access-2hwl2\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.859630 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70bbf6db-858e-41ec-a079-876f60dc0501" (UID: "70bbf6db-858e-41ec-a079-876f60dc0501"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.885338 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "70bbf6db-858e-41ec-a079-876f60dc0501" (UID: "70bbf6db-858e-41ec-a079-876f60dc0501"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.885756 4803 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.885792 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.909437 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-config" (OuterVolumeSpecName: "config") pod "70bbf6db-858e-41ec-a079-876f60dc0501" (UID: "70bbf6db-858e-41ec-a079-876f60dc0501"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.927132 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70bbf6db-858e-41ec-a079-876f60dc0501" (UID: "70bbf6db-858e-41ec-a079-876f60dc0501"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.963336 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70bbf6db-858e-41ec-a079-876f60dc0501" (UID: "70bbf6db-858e-41ec-a079-876f60dc0501"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.989073 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-config\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.989106 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 22:13:59 crc kubenswrapper[4803]: I0127 22:13:59.989115 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70bbf6db-858e-41ec-a079-876f60dc0501-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.038197 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fvf7x"]
Jan 27 22:14:00 crc kubenswrapper[4803]: E0127 22:14:00.039168 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70bbf6db-858e-41ec-a079-876f60dc0501" containerName="init"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.039189 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="70bbf6db-858e-41ec-a079-876f60dc0501" containerName="init"
Jan 27 22:14:00 crc kubenswrapper[4803]: E0127 22:14:00.039380 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70bbf6db-858e-41ec-a079-876f60dc0501" containerName="dnsmasq-dns"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.039399 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="70bbf6db-858e-41ec-a079-876f60dc0501" containerName="dnsmasq-dns"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.043926 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="70bbf6db-858e-41ec-a079-876f60dc0501" containerName="dnsmasq-dns"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.076807 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvf7x"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.084516 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r9w4g" event={"ID":"5fef152e-fc32-4940-9c38-193b933f28ad","Type":"ContainerStarted","Data":"f0b199a22b85a2febb197584cc45ac7d491db5f1829cfcea3fc939e5eea3ff64"}
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.084574 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r9w4g" event={"ID":"5fef152e-fc32-4940-9c38-193b933f28ad","Type":"ContainerStarted","Data":"3985c80ed18f64537d04b0699b39678ef646ba00abe17f887b529d17791b16dc"}
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.097694 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t" event={"ID":"70bbf6db-858e-41ec-a079-876f60dc0501","Type":"ContainerDied","Data":"041f6e9d1021ffdcc5f60b30ab67569f55a68526fee231f2fb5f6670921f629a"}
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.098103 4803 scope.go:117] "RemoveContainer" containerID="7065527c8645fa3b090595903cdfd6183b57f6c8b5eaea4686b06100af778f9a"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.098378 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-pf55t"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.114161 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvf7x"]
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.131469 4803 scope.go:117] "RemoveContainer" containerID="ba5ad5a4e78c61b237990870658214e5e203e25e481ba94f6a9f97210ebc082e"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.174314 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-r9w4g" podStartSLOduration=2.174284581 podStartE2EDuration="2.174284581s" podCreationTimestamp="2026-01-27 22:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:14:00.134293486 +0000 UTC m=+1592.550315195" watchObservedRunningTime="2026-01-27 22:14:00.174284581 +0000 UTC m=+1592.590306290"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.202225 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pf55t"]
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.209287 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxqzm\" (UniqueName: \"kubernetes.io/projected/b788d72b-5d6c-4f1a-8856-cfc292f76d72-kube-api-access-bxqzm\") pod \"community-operators-fvf7x\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") " pod="openshift-marketplace/community-operators-fvf7x"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.209383 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-catalog-content\") pod \"community-operators-fvf7x\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") " pod="openshift-marketplace/community-operators-fvf7x"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.209501 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-utilities\") pod \"community-operators-fvf7x\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") " pod="openshift-marketplace/community-operators-fvf7x"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.216734 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pf55t"]
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.311289 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-utilities\") pod \"community-operators-fvf7x\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") " pod="openshift-marketplace/community-operators-fvf7x"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.311640 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxqzm\" (UniqueName: \"kubernetes.io/projected/b788d72b-5d6c-4f1a-8856-cfc292f76d72-kube-api-access-bxqzm\") pod \"community-operators-fvf7x\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") " pod="openshift-marketplace/community-operators-fvf7x"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.311769 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-utilities\") pod \"community-operators-fvf7x\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") " pod="openshift-marketplace/community-operators-fvf7x"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.311881 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-catalog-content\") pod \"community-operators-fvf7x\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") " pod="openshift-marketplace/community-operators-fvf7x"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.312175 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-catalog-content\") pod \"community-operators-fvf7x\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") " pod="openshift-marketplace/community-operators-fvf7x"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.321640 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70bbf6db-858e-41ec-a079-876f60dc0501" path="/var/lib/kubelet/pods/70bbf6db-858e-41ec-a079-876f60dc0501/volumes"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.331294 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxqzm\" (UniqueName: \"kubernetes.io/projected/b788d72b-5d6c-4f1a-8856-cfc292f76d72-kube-api-access-bxqzm\") pod \"community-operators-fvf7x\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") " pod="openshift-marketplace/community-operators-fvf7x"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.437036 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvf7x"
Jan 27 22:14:00 crc kubenswrapper[4803]: I0127 22:14:00.938551 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvf7x"]
Jan 27 22:14:00 crc kubenswrapper[4803]: W0127 22:14:00.938572 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb788d72b_5d6c_4f1a_8856_cfc292f76d72.slice/crio-67ee3c0a40a9359fb7eea4463ad815d5f6d8c4e7ab80857641ddef93fb9e5dd8 WatchSource:0}: Error finding container 67ee3c0a40a9359fb7eea4463ad815d5f6d8c4e7ab80857641ddef93fb9e5dd8: Status 404 returned error can't find the container with id 67ee3c0a40a9359fb7eea4463ad815d5f6d8c4e7ab80857641ddef93fb9e5dd8
Jan 27 22:14:01 crc kubenswrapper[4803]: I0127 22:14:01.109501 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf7x" event={"ID":"b788d72b-5d6c-4f1a-8856-cfc292f76d72","Type":"ContainerStarted","Data":"67ee3c0a40a9359fb7eea4463ad815d5f6d8c4e7ab80857641ddef93fb9e5dd8"}
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.135442 4803 generic.go:334] "Generic (PLEG): container finished" podID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerID="74f237c091ef1652de9f6c289906d3b02738979f9f99a3ec0d75968dee141f97" exitCode=0
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.135542 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf7x" event={"ID":"b788d72b-5d6c-4f1a-8856-cfc292f76d72","Type":"ContainerDied","Data":"74f237c091ef1652de9f6c289906d3b02738979f9f99a3ec0d75968dee141f97"}
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.143357 4803 generic.go:334] "Generic (PLEG): container finished" podID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerID="edff052d2637cf2f506b8d1ec695638b43afa1fcbfb45aba2a8cd9ee54ec31c3" exitCode=0
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.143388 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"608b9bb2-1f2c-4320-a9a3-50706f74bd06","Type":"ContainerDied","Data":"edff052d2637cf2f506b8d1ec695638b43afa1fcbfb45aba2a8cd9ee54ec31c3"}
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.501894 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.666259 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-combined-ca-bundle\") pod \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") "
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.666483 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8kpv\" (UniqueName: \"kubernetes.io/projected/608b9bb2-1f2c-4320-a9a3-50706f74bd06-kube-api-access-f8kpv\") pod \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") "
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.666567 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-config-data\") pod \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") "
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.666591 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-run-httpd\") pod \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") "
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.666638 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-sg-core-conf-yaml\") pod \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") "
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.666663 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-scripts\") pod \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") "
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.666760 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-log-httpd\") pod \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\" (UID: \"608b9bb2-1f2c-4320-a9a3-50706f74bd06\") "
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.668200 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "608b9bb2-1f2c-4320-a9a3-50706f74bd06" (UID: "608b9bb2-1f2c-4320-a9a3-50706f74bd06"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.668664 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "608b9bb2-1f2c-4320-a9a3-50706f74bd06" (UID: "608b9bb2-1f2c-4320-a9a3-50706f74bd06"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.674676 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-scripts" (OuterVolumeSpecName: "scripts") pod "608b9bb2-1f2c-4320-a9a3-50706f74bd06" (UID: "608b9bb2-1f2c-4320-a9a3-50706f74bd06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.683182 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608b9bb2-1f2c-4320-a9a3-50706f74bd06-kube-api-access-f8kpv" (OuterVolumeSpecName: "kube-api-access-f8kpv") pod "608b9bb2-1f2c-4320-a9a3-50706f74bd06" (UID: "608b9bb2-1f2c-4320-a9a3-50706f74bd06"). InnerVolumeSpecName "kube-api-access-f8kpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.702203 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "608b9bb2-1f2c-4320-a9a3-50706f74bd06" (UID: "608b9bb2-1f2c-4320-a9a3-50706f74bd06"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.769715 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.769748 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8kpv\" (UniqueName: \"kubernetes.io/projected/608b9bb2-1f2c-4320-a9a3-50706f74bd06-kube-api-access-f8kpv\") on node \"crc\" DevicePath \"\""
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.769759 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/608b9bb2-1f2c-4320-a9a3-50706f74bd06-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.769767 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.769775 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.808790 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-config-data" (OuterVolumeSpecName: "config-data") pod "608b9bb2-1f2c-4320-a9a3-50706f74bd06" (UID: "608b9bb2-1f2c-4320-a9a3-50706f74bd06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.830103 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "608b9bb2-1f2c-4320-a9a3-50706f74bd06" (UID: "608b9bb2-1f2c-4320-a9a3-50706f74bd06"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.872927 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:02 crc kubenswrapper[4803]: I0127 22:14:02.873252 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608b9bb2-1f2c-4320-a9a3-50706f74bd06-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.156455 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf7x" event={"ID":"b788d72b-5d6c-4f1a-8856-cfc292f76d72","Type":"ContainerStarted","Data":"cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a"} Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.160957 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"608b9bb2-1f2c-4320-a9a3-50706f74bd06","Type":"ContainerDied","Data":"3a0be3e45a28e992afdfc61cde64ad7a6b4eb0bef12201ddd30725122dcb68ee"} Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.161005 4803 scope.go:117] "RemoveContainer" containerID="ef7f47c41d624c806bc9cbd6715e96179a3b2b116339d824f78723399b6f06ef" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.161330 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.192581 4803 scope.go:117] "RemoveContainer" containerID="039af7d9b6c6cd3077975658bcb218c537c2f133e25ae39b65d3215535a3c952" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.217942 4803 scope.go:117] "RemoveContainer" containerID="9db280fddc1d1d40a5889fc043b56922f5f0660cc0d85307f77be9750530b949" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.231806 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.247444 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.263320 4803 scope.go:117] "RemoveContainer" containerID="edff052d2637cf2f506b8d1ec695638b43afa1fcbfb45aba2a8cd9ee54ec31c3" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.281291 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:03 crc kubenswrapper[4803]: E0127 22:14:03.281914 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="ceilometer-central-agent" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.281936 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="ceilometer-central-agent" Jan 27 22:14:03 crc kubenswrapper[4803]: E0127 22:14:03.281965 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="sg-core" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.281974 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="sg-core" Jan 27 22:14:03 crc kubenswrapper[4803]: E0127 22:14:03.282008 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="proxy-httpd" Jan 27 22:14:03 crc 
kubenswrapper[4803]: I0127 22:14:03.282017 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="proxy-httpd" Jan 27 22:14:03 crc kubenswrapper[4803]: E0127 22:14:03.282054 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="ceilometer-notification-agent" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.282061 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="ceilometer-notification-agent" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.282293 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="ceilometer-central-agent" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.282312 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="proxy-httpd" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.282327 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="sg-core" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.282353 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" containerName="ceilometer-notification-agent" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.297019 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.303750 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.303999 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.310802 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.387343 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.387388 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-run-httpd\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.387419 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.387435 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbt7z\" (UniqueName: \"kubernetes.io/projected/e10f98c4-901d-4c47-b9a7-67fb0521d204-kube-api-access-mbt7z\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " 
pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.387480 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-log-httpd\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.387517 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-scripts\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.387555 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-config-data\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.489220 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.489279 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-run-httpd\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.489310 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.489333 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbt7z\" (UniqueName: \"kubernetes.io/projected/e10f98c4-901d-4c47-b9a7-67fb0521d204-kube-api-access-mbt7z\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.489388 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-log-httpd\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.489426 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-scripts\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.489463 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-config-data\") pod \"ceilometer-0\" (UID: 
\"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.490313 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-run-httpd\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.490667 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-log-httpd\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.494145 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.494258 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.495551 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-scripts\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.510623 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbt7z\" (UniqueName: \"kubernetes.io/projected/e10f98c4-901d-4c47-b9a7-67fb0521d204-kube-api-access-mbt7z\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.523893 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-config-data\") pod \"ceilometer-0\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " pod="openstack/ceilometer-0" Jan 27 22:14:03 crc kubenswrapper[4803]: I0127 22:14:03.648470 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:14:04 crc kubenswrapper[4803]: I0127 22:14:04.212074 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:04 crc kubenswrapper[4803]: W0127 22:14:04.249122 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode10f98c4_901d_4c47_b9a7_67fb0521d204.slice/crio-033491ae1555bdbabf17739c816b1cab2febf8780b47c9cefe1a6097aedbbd63 WatchSource:0}: Error finding container 033491ae1555bdbabf17739c816b1cab2febf8780b47c9cefe1a6097aedbbd63: Status 404 returned error can't find the container with id 033491ae1555bdbabf17739c816b1cab2febf8780b47c9cefe1a6097aedbbd63 Jan 27 22:14:04 crc kubenswrapper[4803]: I0127 22:14:04.336899 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="608b9bb2-1f2c-4320-a9a3-50706f74bd06" path="/var/lib/kubelet/pods/608b9bb2-1f2c-4320-a9a3-50706f74bd06/volumes" Jan 27 22:14:05 crc kubenswrapper[4803]: I0127 22:14:05.189507 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e10f98c4-901d-4c47-b9a7-67fb0521d204","Type":"ContainerStarted","Data":"35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b"} Jan 27 22:14:05 crc kubenswrapper[4803]: I0127 22:14:05.190057 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e10f98c4-901d-4c47-b9a7-67fb0521d204","Type":"ContainerStarted","Data":"033491ae1555bdbabf17739c816b1cab2febf8780b47c9cefe1a6097aedbbd63"} Jan 27 22:14:05 crc kubenswrapper[4803]: I0127 22:14:05.194675 4803 generic.go:334] "Generic (PLEG): container finished" podID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerID="cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a" exitCode=0 Jan 27 22:14:05 crc kubenswrapper[4803]: I0127 22:14:05.194735 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf7x" event={"ID":"b788d72b-5d6c-4f1a-8856-cfc292f76d72","Type":"ContainerDied","Data":"cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a"} Jan 27 22:14:05 crc kubenswrapper[4803]: I0127 22:14:05.307403 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:14:05 crc kubenswrapper[4803]: E0127 22:14:05.307818 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:14:06 crc kubenswrapper[4803]: I0127 22:14:06.207197 4803 generic.go:334] "Generic (PLEG): container finished" podID="5fef152e-fc32-4940-9c38-193b933f28ad" containerID="f0b199a22b85a2febb197584cc45ac7d491db5f1829cfcea3fc939e5eea3ff64" exitCode=0 Jan 27 22:14:06 crc kubenswrapper[4803]: I0127 22:14:06.207256 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r9w4g" event={"ID":"5fef152e-fc32-4940-9c38-193b933f28ad","Type":"ContainerDied","Data":"f0b199a22b85a2febb197584cc45ac7d491db5f1829cfcea3fc939e5eea3ff64"} Jan 27 22:14:06 crc kubenswrapper[4803]: I0127 22:14:06.210689 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-fvf7x" event={"ID":"b788d72b-5d6c-4f1a-8856-cfc292f76d72","Type":"ContainerStarted","Data":"7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90"} Jan 27 22:14:06 crc kubenswrapper[4803]: I0127 22:14:06.212454 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e10f98c4-901d-4c47-b9a7-67fb0521d204","Type":"ContainerStarted","Data":"b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d"} Jan 27 22:14:06 crc kubenswrapper[4803]: I0127 22:14:06.251590 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fvf7x" podStartSLOduration=3.7958872230000003 podStartE2EDuration="7.25156918s" podCreationTimestamp="2026-01-27 22:13:59 +0000 UTC" firstStartedPulling="2026-01-27 22:14:02.138102758 +0000 UTC m=+1594.554124457" lastFinishedPulling="2026-01-27 22:14:05.593784715 +0000 UTC m=+1598.009806414" observedRunningTime="2026-01-27 22:14:06.245875157 +0000 UTC m=+1598.661896876" watchObservedRunningTime="2026-01-27 22:14:06.25156918 +0000 UTC m=+1598.667590879" Jan 27 22:14:06 crc kubenswrapper[4803]: I0127 22:14:06.379138 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 22:14:06 crc kubenswrapper[4803]: I0127 22:14:06.379198 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 22:14:06 crc kubenswrapper[4803]: I0127 22:14:06.996587 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 22:14:07 crc kubenswrapper[4803]: I0127 22:14:07.019080 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 22:14:07 crc kubenswrapper[4803]: I0127 22:14:07.058647 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 22:14:07 crc kubenswrapper[4803]: I0127 22:14:07.230777 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e10f98c4-901d-4c47-b9a7-67fb0521d204","Type":"ContainerStarted","Data":"20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5"} Jan 27 22:14:07 crc kubenswrapper[4803]: I0127 22:14:07.241381 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 22:14:07 crc kubenswrapper[4803]: I0127 22:14:07.429012 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.5:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 22:14:07 crc kubenswrapper[4803]: I0127 22:14:07.429082 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.5:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.038284 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r9w4g" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.230173 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-config-data\") pod \"5fef152e-fc32-4940-9c38-193b933f28ad\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.230480 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4skb\" (UniqueName: \"kubernetes.io/projected/5fef152e-fc32-4940-9c38-193b933f28ad-kube-api-access-s4skb\") pod \"5fef152e-fc32-4940-9c38-193b933f28ad\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.230622 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-combined-ca-bundle\") pod \"5fef152e-fc32-4940-9c38-193b933f28ad\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.230677 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-scripts\") pod \"5fef152e-fc32-4940-9c38-193b933f28ad\" (UID: \"5fef152e-fc32-4940-9c38-193b933f28ad\") " Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.235043 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fef152e-fc32-4940-9c38-193b933f28ad-kube-api-access-s4skb" (OuterVolumeSpecName: "kube-api-access-s4skb") pod "5fef152e-fc32-4940-9c38-193b933f28ad" (UID: "5fef152e-fc32-4940-9c38-193b933f28ad"). InnerVolumeSpecName "kube-api-access-s4skb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.238550 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-scripts" (OuterVolumeSpecName: "scripts") pod "5fef152e-fc32-4940-9c38-193b933f28ad" (UID: "5fef152e-fc32-4940-9c38-193b933f28ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.255592 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e10f98c4-901d-4c47-b9a7-67fb0521d204","Type":"ContainerStarted","Data":"fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d"} Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.257504 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.267458 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-config-data" (OuterVolumeSpecName: "config-data") pod "5fef152e-fc32-4940-9c38-193b933f28ad" (UID: "5fef152e-fc32-4940-9c38-193b933f28ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.276364 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r9w4g" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.276991 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r9w4g" event={"ID":"5fef152e-fc32-4940-9c38-193b933f28ad","Type":"ContainerDied","Data":"3985c80ed18f64537d04b0699b39678ef646ba00abe17f887b529d17791b16dc"} Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.277024 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3985c80ed18f64537d04b0699b39678ef646ba00abe17f887b529d17791b16dc" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.294334 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6968203659999999 podStartE2EDuration="5.294312509s" podCreationTimestamp="2026-01-27 22:14:03 +0000 UTC" firstStartedPulling="2026-01-27 22:14:04.251789256 +0000 UTC m=+1596.667810955" lastFinishedPulling="2026-01-27 22:14:07.849281399 +0000 UTC m=+1600.265303098" observedRunningTime="2026-01-27 22:14:08.282617575 +0000 UTC m=+1600.698639274" watchObservedRunningTime="2026-01-27 22:14:08.294312509 +0000 UTC m=+1600.710334208" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.320446 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fef152e-fc32-4940-9c38-193b933f28ad" (UID: "5fef152e-fc32-4940-9c38-193b933f28ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.335089 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.335141 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.335156 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fef152e-fc32-4940-9c38-193b933f28ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.335169 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4skb\" (UniqueName: \"kubernetes.io/projected/5fef152e-fc32-4940-9c38-193b933f28ad-kube-api-access-s4skb\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.413085 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.413325 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerName="nova-api-log" containerID="cri-o://a2faefefef931713e71f679be4f6f76cc6b77392d31d0b9d590be9e7a79c4566" gracePeriod=30 Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.413407 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerName="nova-api-api" containerID="cri-o://3a021159c130f2955ab5d6fa5197efeec94ff6a0b1457745f5f27f685e425460" 
gracePeriod=30 Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.428750 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.428978 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="da532001-f5a2-4f8d-99ca-c2b8b35fd77a" containerName="nova-scheduler-scheduler" containerID="cri-o://87b939a98ee9f41caf682a9438652bbea98de5063a0ce0f39648ae86da827980" gracePeriod=30 Jan 27 22:14:08 crc kubenswrapper[4803]: I0127 22:14:08.457364 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:14:08 crc kubenswrapper[4803]: E0127 22:14:08.630947 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="87b939a98ee9f41caf682a9438652bbea98de5063a0ce0f39648ae86da827980" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 22:14:08 crc kubenswrapper[4803]: E0127 22:14:08.634937 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="87b939a98ee9f41caf682a9438652bbea98de5063a0ce0f39648ae86da827980" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 22:14:08 crc kubenswrapper[4803]: E0127 22:14:08.636934 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="87b939a98ee9f41caf682a9438652bbea98de5063a0ce0f39648ae86da827980" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 22:14:08 crc kubenswrapper[4803]: E0127 22:14:08.636978 4803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="da532001-f5a2-4f8d-99ca-c2b8b35fd77a" containerName="nova-scheduler-scheduler" Jan 27 22:14:09 crc kubenswrapper[4803]: I0127 22:14:09.298579 4803 generic.go:334] "Generic (PLEG): container finished" podID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerID="a2faefefef931713e71f679be4f6f76cc6b77392d31d0b9d590be9e7a79c4566" exitCode=143 Jan 27 22:14:09 crc kubenswrapper[4803]: I0127 22:14:09.299669 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"495d58f5-c8ce-44c6-a844-57ec21deb347","Type":"ContainerDied","Data":"a2faefefef931713e71f679be4f6f76cc6b77392d31d0b9d590be9e7a79c4566"} Jan 27 22:14:10 crc kubenswrapper[4803]: I0127 22:14:10.307634 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-log" containerID="cri-o://b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c" gracePeriod=30 Jan 27 22:14:10 crc kubenswrapper[4803]: I0127 22:14:10.307711 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-metadata" containerID="cri-o://bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1" gracePeriod=30 Jan 27 22:14:10 crc kubenswrapper[4803]: I0127 
22:14:10.437518 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fvf7x" Jan 27 22:14:10 crc kubenswrapper[4803]: I0127 22:14:10.442186 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fvf7x" Jan 27 22:14:11 crc kubenswrapper[4803]: I0127 22:14:11.318390 4803 generic.go:334] "Generic (PLEG): container finished" podID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerID="b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c" exitCode=143 Jan 27 22:14:11 crc kubenswrapper[4803]: I0127 22:14:11.318493 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c","Type":"ContainerDied","Data":"b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c"} Jan 27 22:14:11 crc kubenswrapper[4803]: I0127 22:14:11.536383 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fvf7x" podUID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerName="registry-server" probeResult="failure" output=< Jan 27 22:14:11 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:14:11 crc kubenswrapper[4803]: > Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.349582 4803 generic.go:334] "Generic (PLEG): container finished" podID="da532001-f5a2-4f8d-99ca-c2b8b35fd77a" containerID="87b939a98ee9f41caf682a9438652bbea98de5063a0ce0f39648ae86da827980" exitCode=0 Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.350156 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da532001-f5a2-4f8d-99ca-c2b8b35fd77a","Type":"ContainerDied","Data":"87b939a98ee9f41caf682a9438652bbea98de5063a0ce0f39648ae86da827980"} Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.547224 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.669018 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdmgx\" (UniqueName: \"kubernetes.io/projected/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-kube-api-access-hdmgx\") pod \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\" (UID: \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.669308 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-combined-ca-bundle\") pod \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\" (UID: \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.669355 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-config-data\") pod \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\" (UID: \"da532001-f5a2-4f8d-99ca-c2b8b35fd77a\") " Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.691985 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-kube-api-access-hdmgx" (OuterVolumeSpecName: "kube-api-access-hdmgx") pod "da532001-f5a2-4f8d-99ca-c2b8b35fd77a" (UID: "da532001-f5a2-4f8d-99ca-c2b8b35fd77a"). InnerVolumeSpecName "kube-api-access-hdmgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.701922 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da532001-f5a2-4f8d-99ca-c2b8b35fd77a" (UID: "da532001-f5a2-4f8d-99ca-c2b8b35fd77a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.715283 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-config-data" (OuterVolumeSpecName: "config-data") pod "da532001-f5a2-4f8d-99ca-c2b8b35fd77a" (UID: "da532001-f5a2-4f8d-99ca-c2b8b35fd77a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.737284 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": read tcp 10.217.0.2:57482->10.217.1.1:8775: read: connection reset by peer" Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.737284 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": read tcp 10.217.0.2:57470->10.217.1.1:8775: read: connection reset by peer" Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.771115 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.771155 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:13 crc kubenswrapper[4803]: I0127 22:14:13.771167 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdmgx\" (UniqueName: \"kubernetes.io/projected/da532001-f5a2-4f8d-99ca-c2b8b35fd77a-kube-api-access-hdmgx\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.365190 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.376170 4803 generic.go:334] "Generic (PLEG): container finished" podID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerID="3a021159c130f2955ab5d6fa5197efeec94ff6a0b1457745f5f27f685e425460" exitCode=0 Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.376286 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"495d58f5-c8ce-44c6-a844-57ec21deb347","Type":"ContainerDied","Data":"3a021159c130f2955ab5d6fa5197efeec94ff6a0b1457745f5f27f685e425460"} Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.395109 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da532001-f5a2-4f8d-99ca-c2b8b35fd77a","Type":"ContainerDied","Data":"2e20810ba6c239212d7b3f018e23561a53ae04117b0d4799b2aa9575f0dc2723"} Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.395156 4803 scope.go:117] "RemoveContainer" containerID="87b939a98ee9f41caf682a9438652bbea98de5063a0ce0f39648ae86da827980" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.395265 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.420245 4803 generic.go:334] "Generic (PLEG): container finished" podID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerID="bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1" exitCode=0 Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.420295 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.420295 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c","Type":"ContainerDied","Data":"bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1"} Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.420472 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c","Type":"ContainerDied","Data":"4c05dab4c0c4f31e310542389c8cd8fc89323f246ec152e38c3f00ea3499d54a"} Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.433490 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.449958 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.453986 4803 scope.go:117] "RemoveContainer" containerID="bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.460877 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:14:14 crc kubenswrapper[4803]: E0127 22:14:14.461362 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-metadata" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.461374 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-metadata" Jan 27 22:14:14 crc kubenswrapper[4803]: E0127 22:14:14.461392 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-log" Jan 27 
22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.461397 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-log" Jan 27 22:14:14 crc kubenswrapper[4803]: E0127 22:14:14.461415 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fef152e-fc32-4940-9c38-193b933f28ad" containerName="nova-manage" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.461421 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fef152e-fc32-4940-9c38-193b933f28ad" containerName="nova-manage" Jan 27 22:14:14 crc kubenswrapper[4803]: E0127 22:14:14.461429 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da532001-f5a2-4f8d-99ca-c2b8b35fd77a" containerName="nova-scheduler-scheduler" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.461435 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="da532001-f5a2-4f8d-99ca-c2b8b35fd77a" containerName="nova-scheduler-scheduler" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.461689 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fef152e-fc32-4940-9c38-193b933f28ad" containerName="nova-manage" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.461706 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="da532001-f5a2-4f8d-99ca-c2b8b35fd77a" containerName="nova-scheduler-scheduler" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.461715 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-metadata" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.461725 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" containerName="nova-metadata-log" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.462533 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.464487 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.481453 4803 scope.go:117] "RemoveContainer" containerID="b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.482105 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.489038 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-combined-ca-bundle\") pod \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.489666 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-nova-metadata-tls-certs\") pod \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.489763 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gpgc\" (UniqueName: \"kubernetes.io/projected/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-kube-api-access-7gpgc\") pod \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.489809 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-config-data\") pod \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.489857 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-logs\") pod \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\" (UID: \"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c\") " Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.490025 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf03068-465b-47ff-8616-7e2af8360631-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0cf03068-465b-47ff-8616-7e2af8360631\") " pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.490094 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mns7h\" (UniqueName: \"kubernetes.io/projected/0cf03068-465b-47ff-8616-7e2af8360631-kube-api-access-mns7h\") pod \"nova-scheduler-0\" (UID: \"0cf03068-465b-47ff-8616-7e2af8360631\") " pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.490146 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf03068-465b-47ff-8616-7e2af8360631-config-data\") pod \"nova-scheduler-0\" (UID: \"0cf03068-465b-47ff-8616-7e2af8360631\") " pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 
22:14:14.491084 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-logs" (OuterVolumeSpecName: "logs") pod "4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" (UID: "4f1ff84f-fa75-4ec3-8a8e-60a33efb107c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.494311 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-kube-api-access-7gpgc" (OuterVolumeSpecName: "kube-api-access-7gpgc") pod "4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" (UID: "4f1ff84f-fa75-4ec3-8a8e-60a33efb107c"). InnerVolumeSpecName "kube-api-access-7gpgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.508187 4803 scope.go:117] "RemoveContainer" containerID="bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1" Jan 27 22:14:14 crc kubenswrapper[4803]: E0127 22:14:14.509469 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1\": container with ID starting with bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1 not found: ID does not exist" containerID="bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.509506 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1"} err="failed to get container status \"bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1\": rpc error: code = NotFound desc = could not find container \"bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1\": container with ID starting with bcf2ee3778c7bbcef36040af3f837be2260826f6ff90873ab611fe975b9552d1 not found: ID does not exist" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.509526 4803 scope.go:117] "RemoveContainer" containerID="b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c" Jan 27 22:14:14 crc kubenswrapper[4803]: E0127 22:14:14.509740 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c\": container with ID starting with b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c not found: ID does not exist" containerID="b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.509767 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c"} err="failed to get container status \"b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c\": rpc error: code = NotFound desc = could not find container \"b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c\": container with ID starting with b53a0f58e4ff2bf339a0d0e66057280c5568b978840c98944f1e9674518d605c not found: ID does not exist" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.545032 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" (UID: "4f1ff84f-fa75-4ec3-8a8e-60a33efb107c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.594176 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mns7h\" (UniqueName: \"kubernetes.io/projected/0cf03068-465b-47ff-8616-7e2af8360631-kube-api-access-mns7h\") pod \"nova-scheduler-0\" (UID: \"0cf03068-465b-47ff-8616-7e2af8360631\") " pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.594260 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf03068-465b-47ff-8616-7e2af8360631-config-data\") pod \"nova-scheduler-0\" (UID: \"0cf03068-465b-47ff-8616-7e2af8360631\") " pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.594397 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf03068-465b-47ff-8616-7e2af8360631-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0cf03068-465b-47ff-8616-7e2af8360631\") " pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.594483 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gpgc\" (UniqueName: \"kubernetes.io/projected/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-kube-api-access-7gpgc\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.594495 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.594504 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.600422 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-config-data" (OuterVolumeSpecName: "config-data") pod "4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" (UID: "4f1ff84f-fa75-4ec3-8a8e-60a33efb107c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.602594 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf03068-465b-47ff-8616-7e2af8360631-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0cf03068-465b-47ff-8616-7e2af8360631\") " pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.624219 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf03068-465b-47ff-8616-7e2af8360631-config-data\") pod \"nova-scheduler-0\" (UID: \"0cf03068-465b-47ff-8616-7e2af8360631\") " pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.638059 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mns7h\" (UniqueName: \"kubernetes.io/projected/0cf03068-465b-47ff-8616-7e2af8360631-kube-api-access-mns7h\") pod \"nova-scheduler-0\" (UID: \"0cf03068-465b-47ff-8616-7e2af8360631\") " pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.677040 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.689010 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" (UID: "4f1ff84f-fa75-4ec3-8a8e-60a33efb107c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.700376 4803 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.700410 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-config-data\") on node \"crc\" DevicePath \"\""
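
The teardown/setup pairs above are the kubelet volume manager reconciling desired state against actual state: volumes of the deleted nova-metadata pod go through UnmountVolume.TearDown and end as "Volume detached", while the replacement nova-scheduler-0 pod's volumes go through "MountVolume started" and "MountVolume.SetUp succeeded". A minimal Go sketch of that reconcile step, with illustrative names rather than kubelet's actual types (map iteration order is unspecified, so the emitted order may vary):

package main

import "fmt"

// reconcile compares the volumes that should be mounted (desired)
// against what is actually mounted (actual) and emits the operation
// names seen in this log. Toy model only, not kubelet's real code.
func reconcile(desired, actual map[string]bool) []string {
	var ops []string
	for vol := range desired {
		if !actual[vol] {
			ops = append(ops, "MountVolume.SetUp "+vol)
		}
	}
	for vol := range actual {
		if !desired[vol] {
			ops = append(ops, "UnmountVolume.TearDown "+vol)
		}
	}
	return ops
}

func main() {
	// The new nova-scheduler-0 pod wants its secret mounted; the old
	// nova-metadata pod's secret is still mounted and must go.
	desired := map[string]bool{"0cf03068-465b-47ff-8616-7e2af8360631-config-data": true}
	actual := map[string]bool{"4f1ff84f-fa75-4ec3-8a8e-60a33efb107c-config-data": true}
	for _, op := range reconcile(desired, actual) {
		fmt.Println(op)
	}
}

Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.778966 4803 util.go:30] "No sandbox for pod can be found. 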
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.802321 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.805339 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/495d58f5-c8ce-44c6-a844-57ec21deb347-logs\") pod \"495d58f5-c8ce-44c6-a844-57ec21deb347\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.805402 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-internal-tls-certs\") pod \"495d58f5-c8ce-44c6-a844-57ec21deb347\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.805475 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-config-data\") pod \"495d58f5-c8ce-44c6-a844-57ec21deb347\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.805584 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-public-tls-certs\") pod \"495d58f5-c8ce-44c6-a844-57ec21deb347\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.805612 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-combined-ca-bundle\") pod \"495d58f5-c8ce-44c6-a844-57ec21deb347\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.805648 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f27qh\" (UniqueName: \"kubernetes.io/projected/495d58f5-c8ce-44c6-a844-57ec21deb347-kube-api-access-f27qh\") pod \"495d58f5-c8ce-44c6-a844-57ec21deb347\" (UID: \"495d58f5-c8ce-44c6-a844-57ec21deb347\") " Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.809031 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/495d58f5-c8ce-44c6-a844-57ec21deb347-logs" (OuterVolumeSpecName: "logs") pod "495d58f5-c8ce-44c6-a844-57ec21deb347" (UID: "495d58f5-c8ce-44c6-a844-57ec21deb347"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.830222 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495d58f5-c8ce-44c6-a844-57ec21deb347-kube-api-access-f27qh" (OuterVolumeSpecName: "kube-api-access-f27qh") pod "495d58f5-c8ce-44c6-a844-57ec21deb347" (UID: "495d58f5-c8ce-44c6-a844-57ec21deb347"). InnerVolumeSpecName "kube-api-access-f27qh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.830918 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.859006 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:14:14 crc kubenswrapper[4803]: E0127 22:14:14.859578 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerName="nova-api-api" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.859600 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerName="nova-api-api" Jan 27 22:14:14 crc kubenswrapper[4803]: E0127 22:14:14.859627 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerName="nova-api-log" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.859633 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerName="nova-api-log" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.859898 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerName="nova-api-log" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.859912 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="495d58f5-c8ce-44c6-a844-57ec21deb347" containerName="nova-api-api" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.861686 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.866988 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.867170 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.871797 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.908715 4803 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/495d58f5-c8ce-44c6-a844-57ec21deb347-logs\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.908742 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f27qh\" (UniqueName: \"kubernetes.io/projected/495d58f5-c8ce-44c6-a844-57ec21deb347-kube-api-access-f27qh\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.939456 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "495d58f5-c8ce-44c6-a844-57ec21deb347" (UID: "495d58f5-c8ce-44c6-a844-57ec21deb347"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.940259 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-config-data" (OuterVolumeSpecName: "config-data") pod "495d58f5-c8ce-44c6-a844-57ec21deb347" (UID: "495d58f5-c8ce-44c6-a844-57ec21deb347"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.968095 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "495d58f5-c8ce-44c6-a844-57ec21deb347" (UID: "495d58f5-c8ce-44c6-a844-57ec21deb347"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:14 crc kubenswrapper[4803]: I0127 22:14:14.993970 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "495d58f5-c8ce-44c6-a844-57ec21deb347" (UID: "495d58f5-c8ce-44c6-a844-57ec21deb347"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.010433 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e71e913-f6e1-4eba-8c26-4ce021672adf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.010611 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e71e913-f6e1-4eba-8c26-4ce021672adf-logs\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.010737 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e71e913-f6e1-4eba-8c26-4ce021672adf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.010863 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e71e913-f6e1-4eba-8c26-4ce021672adf-config-data\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.010895 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dj7g\" (UniqueName: \"kubernetes.io/projected/3e71e913-f6e1-4eba-8c26-4ce021672adf-kube-api-access-5dj7g\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.010972 4803 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.010994 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.011006 4803 reconciler_common.go:293] "Volume detached for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.011017 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495d58f5-c8ce-44c6-a844-57ec21deb347-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.113398 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e71e913-f6e1-4eba-8c26-4ce021672adf-config-data\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.113457 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dj7g\" (UniqueName: \"kubernetes.io/projected/3e71e913-f6e1-4eba-8c26-4ce021672adf-kube-api-access-5dj7g\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.113517 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e71e913-f6e1-4eba-8c26-4ce021672adf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.113632 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e71e913-f6e1-4eba-8c26-4ce021672adf-logs\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.113729 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e71e913-f6e1-4eba-8c26-4ce021672adf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.117225 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e71e913-f6e1-4eba-8c26-4ce021672adf-logs\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.118224 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e71e913-f6e1-4eba-8c26-4ce021672adf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.118985 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e71e913-f6e1-4eba-8c26-4ce021672adf-config-data\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.124120 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e71e913-f6e1-4eba-8c26-4ce021672adf-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.136008 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dj7g\" (UniqueName: \"kubernetes.io/projected/3e71e913-f6e1-4eba-8c26-4ce021672adf-kube-api-access-5dj7g\") pod \"nova-metadata-0\" (UID: \"3e71e913-f6e1-4eba-8c26-4ce021672adf\") " pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.201886 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.441353 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.470790 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"495d58f5-c8ce-44c6-a844-57ec21deb347","Type":"ContainerDied","Data":"1a758f086b68857a457ec68306447a2405f665530df4dc762dcbd96a49ab8afe"} Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.470797 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.470878 4803 scope.go:117] "RemoveContainer" containerID="3a021159c130f2955ab5d6fa5197efeec94ff6a0b1457745f5f27f685e425460" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.526307 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.526867 4803 scope.go:117] "RemoveContainer" containerID="a2faefefef931713e71f679be4f6f76cc6b77392d31d0b9d590be9e7a79c4566" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.553548 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.575269 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.577438 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.580788 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.580969 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.582671 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.592596 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.730995 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.731082 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgjj7\" (UniqueName: \"kubernetes.io/projected/53165673-bed1-401b-aa0d-97d59c239f08-kube-api-access-dgjj7\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.731318 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-config-data\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.731362 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53165673-bed1-401b-aa0d-97d59c239f08-logs\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.731565 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-internal-tls-certs\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.731729 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-public-tls-certs\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.751462 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.833394 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-internal-tls-certs\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.833465 4803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-public-tls-certs\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.833557 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.833634 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgjj7\" (UniqueName: \"kubernetes.io/projected/53165673-bed1-401b-aa0d-97d59c239f08-kube-api-access-dgjj7\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.833705 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-config-data\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.833725 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53165673-bed1-401b-aa0d-97d59c239f08-logs\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.834167 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53165673-bed1-401b-aa0d-97d59c239f08-logs\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.839104 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-internal-tls-certs\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.839794 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-public-tls-certs\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.844970 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.848189 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53165673-bed1-401b-aa0d-97d59c239f08-config-data\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.852916 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgjj7\" (UniqueName: 
\"kubernetes.io/projected/53165673-bed1-401b-aa0d-97d59c239f08-kube-api-access-dgjj7\") pod \"nova-api-0\" (UID: \"53165673-bed1-401b-aa0d-97d59c239f08\") " pod="openstack/nova-api-0" Jan 27 22:14:15 crc kubenswrapper[4803]: I0127 22:14:15.925459 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.318606 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="495d58f5-c8ce-44c6-a844-57ec21deb347" path="/var/lib/kubelet/pods/495d58f5-c8ce-44c6-a844-57ec21deb347/volumes" Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.319776 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f1ff84f-fa75-4ec3-8a8e-60a33efb107c" path="/var/lib/kubelet/pods/4f1ff84f-fa75-4ec3-8a8e-60a33efb107c/volumes" Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.320450 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da532001-f5a2-4f8d-99ca-c2b8b35fd77a" path="/var/lib/kubelet/pods/da532001-f5a2-4f8d-99ca-c2b8b35fd77a/volumes" Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.383775 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 22:14:16 crc kubenswrapper[4803]: W0127 22:14:16.386051 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53165673_bed1_401b_aa0d_97d59c239f08.slice/crio-577a9dfb163236769b880e640e08654a211d9126ede056a7a5147ede003e66e1 WatchSource:0}: Error finding container 577a9dfb163236769b880e640e08654a211d9126ede056a7a5147ede003e66e1: Status 404 returned error can't find the container with id 577a9dfb163236769b880e640e08654a211d9126ede056a7a5147ede003e66e1 Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.489291 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53165673-bed1-401b-aa0d-97d59c239f08","Type":"ContainerStarted","Data":"577a9dfb163236769b880e640e08654a211d9126ede056a7a5147ede003e66e1"} Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.491237 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e71e913-f6e1-4eba-8c26-4ce021672adf","Type":"ContainerStarted","Data":"3202737c97aedce9a5b9b93ef904479dfefb67b4a026fc10f26b999bc06b4c32"} Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.491271 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e71e913-f6e1-4eba-8c26-4ce021672adf","Type":"ContainerStarted","Data":"1248e618199360a1d63f87124c334540da5e616a72d9e8014d17159de80d388f"} Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.491284 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e71e913-f6e1-4eba-8c26-4ce021672adf","Type":"ContainerStarted","Data":"4054d684aa14c716f368be639e2068b0127e7576a90a38718446c9ca28c5b87d"} Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.492819 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0cf03068-465b-47ff-8616-7e2af8360631","Type":"ContainerStarted","Data":"e2ec114654394a353fc4ec0466c2dea8c5eea4d4910f8a1d9e1b81fb602caa33"} Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.492883 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"0cf03068-465b-47ff-8616-7e2af8360631","Type":"ContainerStarted","Data":"8fb384880b24a1b618912417b1180062d7b02e37d285306de5e0ddcf2d01926d"} Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.513645 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.513619588 podStartE2EDuration="2.513619588s" podCreationTimestamp="2026-01-27 22:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:14:16.511383239 +0000 UTC m=+1608.927404938" watchObservedRunningTime="2026-01-27 22:14:16.513619588 +0000 UTC m=+1608.929641297" Jan 27 22:14:16 crc kubenswrapper[4803]: I0127 22:14:16.540714 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.540689437 podStartE2EDuration="2.540689437s" podCreationTimestamp="2026-01-27 22:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:14:16.529909277 +0000 UTC m=+1608.945930986" watchObservedRunningTime="2026-01-27 22:14:16.540689437 +0000 UTC m=+1608.956711146" Jan 27 22:14:17 crc kubenswrapper[4803]: I0127 22:14:17.507150 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53165673-bed1-401b-aa0d-97d59c239f08","Type":"ContainerStarted","Data":"c695cbd24068c8abfcba7db858dbfc3d53f1d6b6d5ba82a87de4e88fb18bc329"} Jan 27 22:14:17 crc kubenswrapper[4803]: I0127 22:14:17.507522 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53165673-bed1-401b-aa0d-97d59c239f08","Type":"ContainerStarted","Data":"c2d2452ad272131c41df0e522270265d111cdeb028b3bd978b8a7723dc426612"} Jan 27 22:14:17 crc kubenswrapper[4803]: I0127 22:14:17.524101 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5240782299999998 podStartE2EDuration="2.52407823s" podCreationTimestamp="2026-01-27 22:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:14:17.522633101 +0000 UTC m=+1609.938654820" watchObservedRunningTime="2026-01-27 22:14:17.52407823 +0000 UTC m=+1609.940099949" Jan 27 22:14:19 crc kubenswrapper[4803]: I0127 22:14:19.780036 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 22:14:20 crc kubenswrapper[4803]: I0127 22:14:20.202317 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 22:14:20 crc kubenswrapper[4803]: I0127 22:14:20.202375 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 22:14:20 crc kubenswrapper[4803]: I0127 22:14:20.307249 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c"
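
The "Observed pod startup duration" entries above carry enough data to check the arithmetic: nothing was pulled (firstStartedPulling/lastFinishedPulling are the zero time), so podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp. A small Go check against the nova-metadata-0 entry, using the timestamp layout as it appears in these fields:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2026-01-27 22:14:16.513619588 +0000 UTC"
	// form used by these kubelet log fields.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-27 22:14:14 +0000 UTC") // podCreationTimestamp
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-27 22:14:16.513619588 +0000 UTC") // watchObservedRunningTime
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 2.513619588s, matching podStartSLOduration
}

Jan 27 22:14:20 crc kubenswrapper[4803]: E0127 22:14:20.307631 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" 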
podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:14:20 crc kubenswrapper[4803]: I0127 22:14:20.486462 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fvf7x" Jan 27 22:14:20 crc kubenswrapper[4803]: I0127 22:14:20.543896 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fvf7x" Jan 27 22:14:20 crc kubenswrapper[4803]: I0127 22:14:20.725395 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvf7x"] Jan 27 22:14:21 crc kubenswrapper[4803]: I0127 22:14:21.553121 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fvf7x" podUID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerName="registry-server" containerID="cri-o://7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90" gracePeriod=2 Jan 27 22:14:21 crc kubenswrapper[4803]: E0127 22:14:21.642622 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb788d72b_5d6c_4f1a_8856_cfc292f76d72.slice/crio-conmon-7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90.scope\": RecentStats: unable to find data in memory cache]" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.092392 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvf7x" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.179660 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxqzm\" (UniqueName: \"kubernetes.io/projected/b788d72b-5d6c-4f1a-8856-cfc292f76d72-kube-api-access-bxqzm\") pod \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") " Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.179917 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-utilities\") pod \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") " Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.180056 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-catalog-content\") pod \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\" (UID: \"b788d72b-5d6c-4f1a-8856-cfc292f76d72\") "
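
The "SyncLoop DELETE" for community-operators-fvf7x is followed by "Killing container with a grace period" with gracePeriod=2: the runtime sends the container's stop signal (SIGTERM by default) and escalates to SIGKILL only if the process outlives the grace period. A self-contained sketch of that term-then-kill shape against an ordinary process (assumes a Unix sleep binary; illustrative of the behaviour, not CRI-O's implementation):

package main

import (
	"os/exec"
	"syscall"
	"time"
)

func main() {
	// Stand-in for the container's main process.
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM) // polite stop: grace period begins
	select {
	case <-done:
		// exited within the grace period (the registry-server container
		// below finishes cleanly with exitCode=0)
	case <-time.After(2 * time.Second): // gracePeriod=2 from the log entry
		_ = cmd.Process.Kill() // escalate to SIGKILL
		<-done
	}
}

Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.180573 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-utilities" (OuterVolumeSpecName: "utilities") pod "b788d72b-5d6c-4f1a-8856-cfc292f76d72" (UID: "b788d72b-5d6c-4f1a-8856-cfc292f76d72"). InnerVolumeSpecName "utilities". 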
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.180836 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.185644 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b788d72b-5d6c-4f1a-8856-cfc292f76d72-kube-api-access-bxqzm" (OuterVolumeSpecName: "kube-api-access-bxqzm") pod "b788d72b-5d6c-4f1a-8856-cfc292f76d72" (UID: "b788d72b-5d6c-4f1a-8856-cfc292f76d72"). InnerVolumeSpecName "kube-api-access-bxqzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.234552 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b788d72b-5d6c-4f1a-8856-cfc292f76d72" (UID: "b788d72b-5d6c-4f1a-8856-cfc292f76d72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.285338 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxqzm\" (UniqueName: \"kubernetes.io/projected/b788d72b-5d6c-4f1a-8856-cfc292f76d72-kube-api-access-bxqzm\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.285440 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b788d72b-5d6c-4f1a-8856-cfc292f76d72-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.569351 4803 generic.go:334] "Generic (PLEG): container finished" podID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerID="7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90" exitCode=0 Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.569413 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf7x" event={"ID":"b788d72b-5d6c-4f1a-8856-cfc292f76d72","Type":"ContainerDied","Data":"7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90"} Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.569454 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf7x" event={"ID":"b788d72b-5d6c-4f1a-8856-cfc292f76d72","Type":"ContainerDied","Data":"67ee3c0a40a9359fb7eea4463ad815d5f6d8c4e7ab80857641ddef93fb9e5dd8"} Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.569482 4803 scope.go:117] "RemoveContainer" containerID="7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.569494 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fvf7x" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.606651 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvf7x"] Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.609596 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fvf7x"] Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.616236 4803 scope.go:117] "RemoveContainer" containerID="cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.654180 4803 scope.go:117] "RemoveContainer" containerID="74f237c091ef1652de9f6c289906d3b02738979f9f99a3ec0d75968dee141f97" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.722119 4803 scope.go:117] "RemoveContainer" containerID="7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90" Jan 27 22:14:22 crc kubenswrapper[4803]: E0127 22:14:22.722675 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90\": container with ID starting with 7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90 not found: ID does not exist" containerID="7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.722726 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90"} err="failed to get container status \"7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90\": rpc error: code = NotFound desc = could not find container \"7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90\": container with ID starting with 7c737bf17299d974a1ab77d1a746bbcc5f7ce0b6a5595c9e3c6ca478d142db90 not found: ID does not exist" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.722758 4803 scope.go:117] "RemoveContainer" containerID="cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a" Jan 27 22:14:22 crc kubenswrapper[4803]: E0127 22:14:22.723216 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a\": container with ID starting with cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a not found: ID does not exist" containerID="cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.723249 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a"} err="failed to get container status \"cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a\": rpc error: code = NotFound desc = could not find container \"cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a\": container with ID starting with cbe2b2447303a8af881e4f467138a2100ab6ac3e263bb13a4a047c21b04dbb5a not found: ID does not exist" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.723270 4803 scope.go:117] "RemoveContainer" containerID="74f237c091ef1652de9f6c289906d3b02738979f9f99a3ec0d75968dee141f97"
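
The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" lines here (and earlier at 22:14:14) are the kubelet re-issuing RemoveContainer for IDs CRI-O has already deleted along with the pod; in this sequence the rpc NotFound responses read as expected cleanup noise rather than a fault. When triaging a journal like this one it can help to filter that chatter out; a minimal stdin filter of my own (not a kubelet tool):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Drop the benign NotFound chatter that follows RemoveContainer for
// already-deleted containers, so unexpected runtime errors stand out.
var benign = regexp.MustCompile(`code = NotFound desc = could not find container`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if line := sc.Text(); !benign.MatchString(line) {
			fmt.Println(line)
		}
	}
}

Jan 27 22:14:22 crc kubenswrapper[4803]: E0127 22:14:22.723548 4803 log.go:32] "ContainerStatus from runtime service 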
failed" err="rpc error: code = NotFound desc = could not find container \"74f237c091ef1652de9f6c289906d3b02738979f9f99a3ec0d75968dee141f97\": container with ID starting with 74f237c091ef1652de9f6c289906d3b02738979f9f99a3ec0d75968dee141f97 not found: ID does not exist" containerID="74f237c091ef1652de9f6c289906d3b02738979f9f99a3ec0d75968dee141f97" Jan 27 22:14:22 crc kubenswrapper[4803]: I0127 22:14:22.723593 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f237c091ef1652de9f6c289906d3b02738979f9f99a3ec0d75968dee141f97"} err="failed to get container status \"74f237c091ef1652de9f6c289906d3b02738979f9f99a3ec0d75968dee141f97\": rpc error: code = NotFound desc = could not find container \"74f237c091ef1652de9f6c289906d3b02738979f9f99a3ec0d75968dee141f97\": container with ID starting with 74f237c091ef1652de9f6c289906d3b02738979f9f99a3ec0d75968dee141f97 not found: ID does not exist" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.194653 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.325905 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" path="/var/lib/kubelet/pods/b788d72b-5d6c-4f1a-8856-cfc292f76d72/volumes" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.361173 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-config-data\") pod \"e3bc9280-942c-487a-85ef-3da17fa151ba\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.361233 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cj8l\" (UniqueName: \"kubernetes.io/projected/e3bc9280-942c-487a-85ef-3da17fa151ba-kube-api-access-5cj8l\") pod \"e3bc9280-942c-487a-85ef-3da17fa151ba\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.361355 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-scripts\") pod \"e3bc9280-942c-487a-85ef-3da17fa151ba\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.361496 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-combined-ca-bundle\") pod \"e3bc9280-942c-487a-85ef-3da17fa151ba\" (UID: \"e3bc9280-942c-487a-85ef-3da17fa151ba\") " Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.372128 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-scripts" (OuterVolumeSpecName: "scripts") pod "e3bc9280-942c-487a-85ef-3da17fa151ba" (UID: "e3bc9280-942c-487a-85ef-3da17fa151ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.381919 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3bc9280-942c-487a-85ef-3da17fa151ba-kube-api-access-5cj8l" (OuterVolumeSpecName: "kube-api-access-5cj8l") pod "e3bc9280-942c-487a-85ef-3da17fa151ba" (UID: "e3bc9280-942c-487a-85ef-3da17fa151ba"). 
InnerVolumeSpecName "kube-api-access-5cj8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.464648 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cj8l\" (UniqueName: \"kubernetes.io/projected/e3bc9280-942c-487a-85ef-3da17fa151ba-kube-api-access-5cj8l\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.464678 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.508771 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3bc9280-942c-487a-85ef-3da17fa151ba" (UID: "e3bc9280-942c-487a-85ef-3da17fa151ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.510528 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-config-data" (OuterVolumeSpecName: "config-data") pod "e3bc9280-942c-487a-85ef-3da17fa151ba" (UID: "e3bc9280-942c-487a-85ef-3da17fa151ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.566791 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.566839 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bc9280-942c-487a-85ef-3da17fa151ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.595916 4803 generic.go:334] "Generic (PLEG): container finished" podID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerID="70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b" exitCode=137 Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.595962 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e3bc9280-942c-487a-85ef-3da17fa151ba","Type":"ContainerDied","Data":"70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b"} Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.595985 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.596002 4803 scope.go:117] "RemoveContainer" containerID="70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.595992 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e3bc9280-942c-487a-85ef-3da17fa151ba","Type":"ContainerDied","Data":"ff17b9084cd24ee86f9a9d9e66f7f6fc037972b6ef3d2b9cdf39c4f162eecc7f"} Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.624117 4803 scope.go:117] "RemoveContainer" containerID="940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.640099 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.649653 4803 scope.go:117] "RemoveContainer" containerID="ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.661541 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.713796 4803 scope.go:117] "RemoveContainer" containerID="0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.713986 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 22:14:24 crc kubenswrapper[4803]: E0127 22:14:24.714869 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-notifier" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.714892 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-notifier" Jan 27 22:14:24 crc kubenswrapper[4803]: E0127 22:14:24.714915 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerName="extract-utilities" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.714921 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerName="extract-utilities" Jan 27 22:14:24 crc kubenswrapper[4803]: E0127 22:14:24.714943 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-api" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.714949 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-api" Jan 27 22:14:24 crc kubenswrapper[4803]: E0127 22:14:24.714980 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerName="extract-content" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.714985 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerName="extract-content" Jan 27 22:14:24 crc kubenswrapper[4803]: E0127 22:14:24.715001 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerName="registry-server" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.715010 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerName="registry-server" Jan 27 22:14:24 crc kubenswrapper[4803]: E0127 22:14:24.715019 4803 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-listener" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.715025 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-listener" Jan 27 22:14:24 crc kubenswrapper[4803]: E0127 22:14:24.715031 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-evaluator" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.715037 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-evaluator" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.718103 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-api" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.718156 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="b788d72b-5d6c-4f1a-8856-cfc292f76d72" containerName="registry-server" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.718175 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-evaluator" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.718202 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-notifier" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.718220 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" containerName="aodh-listener"
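
While admitting the replacement aodh-0 pod, the cpu_manager and memory_manager purge per-container accounting for pods that are no longer active ("RemoveStaleState", "Deleted CPUSet assignment"); the entries are keyed by podUID and containerName, and the E-severity lines appear to be routine cleanup rather than failures. A toy version of that purge, with illustrative types rather than the kubelet's state stores:

package main

import "fmt"

// key mirrors how the managers index per-container state in these logs.
type key struct{ podUID, container string }

// removeStale drops assignments whose podUID is no longer active.
// Deleting entries while ranging over a Go map is safe.
func removeStale(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %s/%s\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		// old aodh-0 pod; "cpuset 0-1" is a hypothetical assignment
		{"e3bc9280-942c-487a-85ef-3da17fa151ba", "aodh-listener"}: "cpuset 0-1",
	}
	active := map[string]bool{"d1914e01-7a22-4771-b16b-d54d6c902b67": true} // new aodh-0 pod
	removeStale(assignments, active)
}

Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.724620 4803 util.go:30] "No sandbox for pod can be found. 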
Need to start a new one" pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.727361 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.727533 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.727822 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-vtwk7" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.728204 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.728464 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.745950 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.761542 4803 scope.go:117] "RemoveContainer" containerID="70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b" Jan 27 22:14:24 crc kubenswrapper[4803]: E0127 22:14:24.765325 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b\": container with ID starting with 70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b not found: ID does not exist" containerID="70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.765368 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b"} err="failed to get container status \"70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b\": rpc error: code = NotFound desc = could not find container \"70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b\": container with ID starting with 70e92916096ef02c84fad71f03b79ea256e7ffc6e502e5024e38215fad396f1b not found: ID does not exist" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.765390 4803 scope.go:117] "RemoveContainer" containerID="940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269" Jan 27 22:14:24 crc kubenswrapper[4803]: E0127 22:14:24.766882 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269\": container with ID starting with 940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269 not found: ID does not exist" containerID="940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.766908 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269"} err="failed to get container status \"940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269\": rpc error: code = NotFound desc = could not find container \"940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269\": container with ID starting with 940e7bc9ab821b4d43ead32b8887b6ca00aae810449dbfae26f951e62e784269 not found: ID does not exist" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 
22:14:24.766924 4803 scope.go:117] "RemoveContainer" containerID="ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d" Jan 27 22:14:24 crc kubenswrapper[4803]: E0127 22:14:24.767283 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d\": container with ID starting with ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d not found: ID does not exist" containerID="ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.767302 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d"} err="failed to get container status \"ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d\": rpc error: code = NotFound desc = could not find container \"ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d\": container with ID starting with ef8487c37a2ecbc0fd592a2447593ddb28a0e9607c04faad91e62a016c4f159d not found: ID does not exist" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.767317 4803 scope.go:117] "RemoveContainer" containerID="0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6" Jan 27 22:14:24 crc kubenswrapper[4803]: E0127 22:14:24.767643 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6\": container with ID starting with 0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6 not found: ID does not exist" containerID="0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.767692 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6"} err="failed to get container status \"0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6\": rpc error: code = NotFound desc = could not find container \"0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6\": container with ID starting with 0add3f16cdd51bc0e03c186736289d888fa1a53ef650d3db49e013462b0b50c6 not found: ID does not exist" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.780263 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.813513 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.877574 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmtgc\" (UniqueName: \"kubernetes.io/projected/d1914e01-7a22-4771-b16b-d54d6c902b67-kube-api-access-pmtgc\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.877625 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc 
kubenswrapper[4803]: I0127 22:14:24.877669 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-public-tls-certs\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.877900 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-scripts\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.878023 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-internal-tls-certs\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.878065 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-config-data\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.980151 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-scripts\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.980445 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-internal-tls-certs\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.980468 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-config-data\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.980645 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmtgc\" (UniqueName: \"kubernetes.io/projected/d1914e01-7a22-4771-b16b-d54d6c902b67-kube-api-access-pmtgc\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.980670 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.980705 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-public-tls-certs\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.983617 4803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-scripts\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.984215 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-internal-tls-certs\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.989461 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.990052 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-config-data\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:24 crc kubenswrapper[4803]: I0127 22:14:24.990149 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-public-tls-certs\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:25 crc kubenswrapper[4803]: I0127 22:14:25.001392 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmtgc\" (UniqueName: \"kubernetes.io/projected/d1914e01-7a22-4771-b16b-d54d6c902b67-kube-api-access-pmtgc\") pod \"aodh-0\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " pod="openstack/aodh-0" Jan 27 22:14:25 crc kubenswrapper[4803]: I0127 22:14:25.064180 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 22:14:25 crc kubenswrapper[4803]: I0127 22:14:25.202402 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 22:14:25 crc kubenswrapper[4803]: I0127 22:14:25.202771 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 22:14:25 crc kubenswrapper[4803]: I0127 22:14:25.592355 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 22:14:25 crc kubenswrapper[4803]: I0127 22:14:25.609588 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1914e01-7a22-4771-b16b-d54d6c902b67","Type":"ContainerStarted","Data":"fa200449c054acbb934fd2442d40b433a6a3104eaaaa2d75523447a5fa2f77a1"} Jan 27 22:14:25 crc kubenswrapper[4803]: I0127 22:14:25.648752 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 22:14:25 crc kubenswrapper[4803]: I0127 22:14:25.926279 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 22:14:25 crc kubenswrapper[4803]: I0127 22:14:25.926628 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 22:14:26 crc kubenswrapper[4803]: I0127 22:14:26.221077 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3e71e913-f6e1-4eba-8c26-4ce021672adf" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.10:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 22:14:26 crc kubenswrapper[4803]: I0127 22:14:26.221101 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3e71e913-f6e1-4eba-8c26-4ce021672adf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.10:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 22:14:26 crc kubenswrapper[4803]: I0127 22:14:26.320096 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3bc9280-942c-487a-85ef-3da17fa151ba" path="/var/lib/kubelet/pods/e3bc9280-942c-487a-85ef-3da17fa151ba/volumes" Jan 27 22:14:26 crc kubenswrapper[4803]: I0127 22:14:26.626176 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1914e01-7a22-4771-b16b-d54d6c902b67","Type":"ContainerStarted","Data":"48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58"} Jan 27 22:14:26 crc kubenswrapper[4803]: I0127 22:14:26.945034 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="53165673-bed1-401b-aa0d-97d59c239f08" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.11:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 22:14:26 crc kubenswrapper[4803]: I0127 22:14:26.945459 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="53165673-bed1-401b-aa0d-97d59c239f08" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.11:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 22:14:27 crc kubenswrapper[4803]: I0127 22:14:27.653430 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"d1914e01-7a22-4771-b16b-d54d6c902b67","Type":"ContainerStarted","Data":"9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2"} Jan 27 22:14:28 crc kubenswrapper[4803]: I0127 22:14:28.665660 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1914e01-7a22-4771-b16b-d54d6c902b67","Type":"ContainerStarted","Data":"16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de"} Jan 27 22:14:28 crc kubenswrapper[4803]: I0127 22:14:28.666000 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1914e01-7a22-4771-b16b-d54d6c902b67","Type":"ContainerStarted","Data":"3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487"} Jan 27 22:14:28 crc kubenswrapper[4803]: I0127 22:14:28.691151 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.047802321 podStartE2EDuration="4.691128852s" podCreationTimestamp="2026-01-27 22:14:24 +0000 UTC" firstStartedPulling="2026-01-27 22:14:25.587802684 +0000 UTC m=+1618.003824373" lastFinishedPulling="2026-01-27 22:14:28.231129205 +0000 UTC m=+1620.647150904" observedRunningTime="2026-01-27 22:14:28.683050034 +0000 UTC m=+1621.099071743" watchObservedRunningTime="2026-01-27 22:14:28.691128852 +0000 UTC m=+1621.107150551" Jan 27 22:14:33 crc kubenswrapper[4803]: I0127 22:14:33.660919 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 22:14:35 crc kubenswrapper[4803]: I0127 22:14:35.206903 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 22:14:35 crc kubenswrapper[4803]: I0127 22:14:35.208900 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 22:14:35 crc kubenswrapper[4803]: I0127 22:14:35.214614 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 22:14:35 crc kubenswrapper[4803]: I0127 22:14:35.306650 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:14:35 crc kubenswrapper[4803]: E0127 22:14:35.307217 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:14:35 crc kubenswrapper[4803]: I0127 22:14:35.753547 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 22:14:35 crc kubenswrapper[4803]: I0127 22:14:35.932490 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 22:14:35 crc kubenswrapper[4803]: I0127 22:14:35.933055 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 22:14:35 crc kubenswrapper[4803]: I0127 22:14:35.935763 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 22:14:35 crc kubenswrapper[4803]: I0127 22:14:35.941484 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 22:14:36 crc kubenswrapper[4803]: I0127 
22:14:36.757587 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 22:14:36 crc kubenswrapper[4803]: I0127 22:14:36.766316 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 22:14:38 crc kubenswrapper[4803]: I0127 22:14:38.283111 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 22:14:38 crc kubenswrapper[4803]: I0127 22:14:38.283650 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ad3f4a0a-feb7-457e-bb68-9e0a8e420568" containerName="kube-state-metrics" containerID="cri-o://ff642124702bafef96d2171fb5b9d348c6ca8d70c0861bd1fd2117036e39846d" gracePeriod=30 Jan 27 22:14:38 crc kubenswrapper[4803]: I0127 22:14:38.563394 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 22:14:38 crc kubenswrapper[4803]: I0127 22:14:38.563584 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="4414c4e3-3baa-4339-95de-5dc17a42210b" containerName="mysqld-exporter" containerID="cri-o://e9041aa0adedea8c6f825f569298768e7816db515ada829c7de17f0e951bfa97" gracePeriod=30 Jan 27 22:14:38 crc kubenswrapper[4803]: I0127 22:14:38.791655 4803 generic.go:334] "Generic (PLEG): container finished" podID="ad3f4a0a-feb7-457e-bb68-9e0a8e420568" containerID="ff642124702bafef96d2171fb5b9d348c6ca8d70c0861bd1fd2117036e39846d" exitCode=2 Jan 27 22:14:38 crc kubenswrapper[4803]: I0127 22:14:38.791989 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ad3f4a0a-feb7-457e-bb68-9e0a8e420568","Type":"ContainerDied","Data":"ff642124702bafef96d2171fb5b9d348c6ca8d70c0861bd1fd2117036e39846d"} Jan 27 22:14:38 crc kubenswrapper[4803]: I0127 22:14:38.796751 4803 generic.go:334] "Generic (PLEG): container finished" podID="4414c4e3-3baa-4339-95de-5dc17a42210b" containerID="e9041aa0adedea8c6f825f569298768e7816db515ada829c7de17f0e951bfa97" exitCode=2 Jan 27 22:14:38 crc kubenswrapper[4803]: I0127 22:14:38.796905 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4414c4e3-3baa-4339-95de-5dc17a42210b","Type":"ContainerDied","Data":"e9041aa0adedea8c6f825f569298768e7816db515ada829c7de17f0e951bfa97"} Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.029346 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.106462 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.155562 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6wzb\" (UniqueName: \"kubernetes.io/projected/ad3f4a0a-feb7-457e-bb68-9e0a8e420568-kube-api-access-k6wzb\") pod \"ad3f4a0a-feb7-457e-bb68-9e0a8e420568\" (UID: \"ad3f4a0a-feb7-457e-bb68-9e0a8e420568\") " Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.170142 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad3f4a0a-feb7-457e-bb68-9e0a8e420568-kube-api-access-k6wzb" (OuterVolumeSpecName: "kube-api-access-k6wzb") pod "ad3f4a0a-feb7-457e-bb68-9e0a8e420568" (UID: "ad3f4a0a-feb7-457e-bb68-9e0a8e420568"). InnerVolumeSpecName "kube-api-access-k6wzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.258214 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84l6c\" (UniqueName: \"kubernetes.io/projected/4414c4e3-3baa-4339-95de-5dc17a42210b-kube-api-access-84l6c\") pod \"4414c4e3-3baa-4339-95de-5dc17a42210b\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.258571 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-config-data\") pod \"4414c4e3-3baa-4339-95de-5dc17a42210b\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.258840 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-combined-ca-bundle\") pod \"4414c4e3-3baa-4339-95de-5dc17a42210b\" (UID: \"4414c4e3-3baa-4339-95de-5dc17a42210b\") " Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.259695 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6wzb\" (UniqueName: \"kubernetes.io/projected/ad3f4a0a-feb7-457e-bb68-9e0a8e420568-kube-api-access-k6wzb\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.265506 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4414c4e3-3baa-4339-95de-5dc17a42210b-kube-api-access-84l6c" (OuterVolumeSpecName: "kube-api-access-84l6c") pod "4414c4e3-3baa-4339-95de-5dc17a42210b" (UID: "4414c4e3-3baa-4339-95de-5dc17a42210b"). InnerVolumeSpecName "kube-api-access-84l6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.298493 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4414c4e3-3baa-4339-95de-5dc17a42210b" (UID: "4414c4e3-3baa-4339-95de-5dc17a42210b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.331591 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-config-data" (OuterVolumeSpecName: "config-data") pod "4414c4e3-3baa-4339-95de-5dc17a42210b" (UID: "4414c4e3-3baa-4339-95de-5dc17a42210b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.362598 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84l6c\" (UniqueName: \"kubernetes.io/projected/4414c4e3-3baa-4339-95de-5dc17a42210b-kube-api-access-84l6c\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.362641 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.362654 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4414c4e3-3baa-4339-95de-5dc17a42210b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.809469 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4414c4e3-3baa-4339-95de-5dc17a42210b","Type":"ContainerDied","Data":"3b5022d02f87d5a99a6d37ba681dc3432312c489260542280c42dc5299892437"} Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.809558 4803 scope.go:117] "RemoveContainer" containerID="e9041aa0adedea8c6f825f569298768e7816db515ada829c7de17f0e951bfa97" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.809487 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.815424 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ad3f4a0a-feb7-457e-bb68-9e0a8e420568","Type":"ContainerDied","Data":"303c3d8771d355786b72d85004e6274ea027dc732235d85840bb05c61e8b9c5c"} Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.815520 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.863652 4803 scope.go:117] "RemoveContainer" containerID="ff642124702bafef96d2171fb5b9d348c6ca8d70c0861bd1fd2117036e39846d" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.873810 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.907711 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.932570 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.942699 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 22:14:39 crc kubenswrapper[4803]: E0127 22:14:39.943256 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4414c4e3-3baa-4339-95de-5dc17a42210b" containerName="mysqld-exporter" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.943274 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4414c4e3-3baa-4339-95de-5dc17a42210b" containerName="mysqld-exporter" Jan 27 22:14:39 crc kubenswrapper[4803]: E0127 22:14:39.943307 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3f4a0a-feb7-457e-bb68-9e0a8e420568" containerName="kube-state-metrics" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.943318 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3f4a0a-feb7-457e-bb68-9e0a8e420568" containerName="kube-state-metrics" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.943599 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4414c4e3-3baa-4339-95de-5dc17a42210b" containerName="mysqld-exporter" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.943628 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad3f4a0a-feb7-457e-bb68-9e0a8e420568" containerName="kube-state-metrics" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.944458 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.949798 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.949970 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.957169 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.971526 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.983659 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.985568 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.987609 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 22:14:39 crc kubenswrapper[4803]: I0127 22:14:39.987671 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:39.999512 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.082997 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfd832f4-d1c8-4283-b3cb-55cd225022e4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.083286 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-config-data\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.083414 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd832f4-d1c8-4283-b3cb-55cd225022e4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.083528 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.083623 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.083746 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk27g\" (UniqueName: \"kubernetes.io/projected/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-kube-api-access-dk27g\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.083820 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfz7f\" (UniqueName: \"kubernetes.io/projected/bfd832f4-d1c8-4283-b3cb-55cd225022e4-kube-api-access-rfz7f\") pod \"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.083939 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/bfd832f4-d1c8-4283-b3cb-55cd225022e4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.186062 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd832f4-d1c8-4283-b3cb-55cd225022e4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.186329 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.186437 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.186550 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk27g\" (UniqueName: \"kubernetes.io/projected/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-kube-api-access-dk27g\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.186627 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfz7f\" (UniqueName: \"kubernetes.io/projected/bfd832f4-d1c8-4283-b3cb-55cd225022e4-kube-api-access-rfz7f\") pod \"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.186720 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bfd832f4-d1c8-4283-b3cb-55cd225022e4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.186821 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfd832f4-d1c8-4283-b3cb-55cd225022e4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.186918 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-config-data\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.192451 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd832f4-d1c8-4283-b3cb-55cd225022e4-combined-ca-bundle\") pod 
\"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.193644 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bfd832f4-d1c8-4283-b3cb-55cd225022e4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.194887 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.195461 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfd832f4-d1c8-4283-b3cb-55cd225022e4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.207618 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-config-data\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.208031 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.212624 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfz7f\" (UniqueName: \"kubernetes.io/projected/bfd832f4-d1c8-4283-b3cb-55cd225022e4-kube-api-access-rfz7f\") pod \"kube-state-metrics-0\" (UID: \"bfd832f4-d1c8-4283-b3cb-55cd225022e4\") " pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.215570 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk27g\" (UniqueName: \"kubernetes.io/projected/2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2-kube-api-access-dk27g\") pod \"mysqld-exporter-0\" (UID: \"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2\") " pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.265081 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.309388 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.326008 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4414c4e3-3baa-4339-95de-5dc17a42210b" path="/var/lib/kubelet/pods/4414c4e3-3baa-4339-95de-5dc17a42210b/volumes" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.327550 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad3f4a0a-feb7-457e-bb68-9e0a8e420568" path="/var/lib/kubelet/pods/ad3f4a0a-feb7-457e-bb68-9e0a8e420568/volumes" Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.798803 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.799404 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="ceilometer-central-agent" containerID="cri-o://35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b" gracePeriod=30 Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.799428 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="proxy-httpd" containerID="cri-o://fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d" gracePeriod=30 Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.799506 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="ceilometer-notification-agent" containerID="cri-o://b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d" gracePeriod=30 Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.799524 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="sg-core" containerID="cri-o://20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5" gracePeriod=30 Jan 27 22:14:40 crc kubenswrapper[4803]: W0127 22:14:40.834461 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e41c64d_3e0f_4862_9d78_1d3fd0e9fbe2.slice/crio-7de21054193df6c85a8cad522dc0429bec5e9bf677809b8aad285405d908a485 WatchSource:0}: Error finding container 7de21054193df6c85a8cad522dc0429bec5e9bf677809b8aad285405d908a485: Status 404 returned error can't find the container with id 7de21054193df6c85a8cad522dc0429bec5e9bf677809b8aad285405d908a485 Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.836574 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 22:14:40 crc kubenswrapper[4803]: I0127 22:14:40.996276 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 22:14:40 crc kubenswrapper[4803]: W0127 22:14:40.996377 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfd832f4_d1c8_4283_b3cb_55cd225022e4.slice/crio-0189794a7e8f756275b6881287d6cefa9e132bcb03cdc1f4b510d4b241df0a2c WatchSource:0}: Error finding container 0189794a7e8f756275b6881287d6cefa9e132bcb03cdc1f4b510d4b241df0a2c: Status 404 returned error can't find the container with id 0189794a7e8f756275b6881287d6cefa9e132bcb03cdc1f4b510d4b241df0a2c Jan 27 22:14:41 crc kubenswrapper[4803]: I0127 
22:14:41.844731 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2","Type":"ContainerStarted","Data":"7de21054193df6c85a8cad522dc0429bec5e9bf677809b8aad285405d908a485"} Jan 27 22:14:41 crc kubenswrapper[4803]: I0127 22:14:41.848191 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bfd832f4-d1c8-4283-b3cb-55cd225022e4","Type":"ContainerStarted","Data":"0189794a7e8f756275b6881287d6cefa9e132bcb03cdc1f4b510d4b241df0a2c"} Jan 27 22:14:41 crc kubenswrapper[4803]: I0127 22:14:41.853181 4803 generic.go:334] "Generic (PLEG): container finished" podID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerID="fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d" exitCode=0 Jan 27 22:14:41 crc kubenswrapper[4803]: I0127 22:14:41.853365 4803 generic.go:334] "Generic (PLEG): container finished" podID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerID="20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5" exitCode=2 Jan 27 22:14:41 crc kubenswrapper[4803]: I0127 22:14:41.853485 4803 generic.go:334] "Generic (PLEG): container finished" podID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerID="35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b" exitCode=0 Jan 27 22:14:41 crc kubenswrapper[4803]: I0127 22:14:41.853431 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e10f98c4-901d-4c47-b9a7-67fb0521d204","Type":"ContainerDied","Data":"fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d"} Jan 27 22:14:41 crc kubenswrapper[4803]: I0127 22:14:41.853774 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e10f98c4-901d-4c47-b9a7-67fb0521d204","Type":"ContainerDied","Data":"20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5"} Jan 27 22:14:41 crc kubenswrapper[4803]: I0127 22:14:41.853941 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e10f98c4-901d-4c47-b9a7-67fb0521d204","Type":"ContainerDied","Data":"35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b"} Jan 27 22:14:42 crc kubenswrapper[4803]: I0127 22:14:42.870593 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2","Type":"ContainerStarted","Data":"69c093b1b98338aaefa0d905e1daf4c25688c3894b44a38f58e5d4b9683b64cb"} Jan 27 22:14:42 crc kubenswrapper[4803]: I0127 22:14:42.893529 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.0847433300000002 podStartE2EDuration="3.893510291s" podCreationTimestamp="2026-01-27 22:14:39 +0000 UTC" firstStartedPulling="2026-01-27 22:14:40.837673315 +0000 UTC m=+1633.253695014" lastFinishedPulling="2026-01-27 22:14:41.646440266 +0000 UTC m=+1634.062461975" observedRunningTime="2026-01-27 22:14:42.887215452 +0000 UTC m=+1635.303237151" watchObservedRunningTime="2026-01-27 22:14:42.893510291 +0000 UTC m=+1635.309531990" Jan 27 22:14:43 crc kubenswrapper[4803]: I0127 22:14:43.883079 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bfd832f4-d1c8-4283-b3cb-55cd225022e4","Type":"ContainerStarted","Data":"9aa9015b9af26e69bbd95056c17e5027d063ba9ba5d845de8c477ebe94994e43"} Jan 27 22:14:43 crc kubenswrapper[4803]: I0127 22:14:43.900345 4803 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.796717689 podStartE2EDuration="4.900322441s" podCreationTimestamp="2026-01-27 22:14:39 +0000 UTC" firstStartedPulling="2026-01-27 22:14:40.99840866 +0000 UTC m=+1633.414430359" lastFinishedPulling="2026-01-27 22:14:43.102013402 +0000 UTC m=+1635.518035111" observedRunningTime="2026-01-27 22:14:43.897426463 +0000 UTC m=+1636.313448172" watchObservedRunningTime="2026-01-27 22:14:43.900322441 +0000 UTC m=+1636.316344140" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.717421 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.798481 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-combined-ca-bundle\") pod \"e10f98c4-901d-4c47-b9a7-67fb0521d204\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.798820 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-config-data\") pod \"e10f98c4-901d-4c47-b9a7-67fb0521d204\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.799023 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-sg-core-conf-yaml\") pod \"e10f98c4-901d-4c47-b9a7-67fb0521d204\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.799119 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-log-httpd\") pod \"e10f98c4-901d-4c47-b9a7-67fb0521d204\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.799255 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbt7z\" (UniqueName: \"kubernetes.io/projected/e10f98c4-901d-4c47-b9a7-67fb0521d204-kube-api-access-mbt7z\") pod \"e10f98c4-901d-4c47-b9a7-67fb0521d204\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.799359 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-scripts\") pod \"e10f98c4-901d-4c47-b9a7-67fb0521d204\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.799493 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-run-httpd\") pod \"e10f98c4-901d-4c47-b9a7-67fb0521d204\" (UID: \"e10f98c4-901d-4c47-b9a7-67fb0521d204\") " Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.799886 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e10f98c4-901d-4c47-b9a7-67fb0521d204" (UID: "e10f98c4-901d-4c47-b9a7-67fb0521d204"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.800322 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.801246 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e10f98c4-901d-4c47-b9a7-67fb0521d204" (UID: "e10f98c4-901d-4c47-b9a7-67fb0521d204"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.808943 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-scripts" (OuterVolumeSpecName: "scripts") pod "e10f98c4-901d-4c47-b9a7-67fb0521d204" (UID: "e10f98c4-901d-4c47-b9a7-67fb0521d204"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.815224 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e10f98c4-901d-4c47-b9a7-67fb0521d204-kube-api-access-mbt7z" (OuterVolumeSpecName: "kube-api-access-mbt7z") pod "e10f98c4-901d-4c47-b9a7-67fb0521d204" (UID: "e10f98c4-901d-4c47-b9a7-67fb0521d204"). InnerVolumeSpecName "kube-api-access-mbt7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.851334 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e10f98c4-901d-4c47-b9a7-67fb0521d204" (UID: "e10f98c4-901d-4c47-b9a7-67fb0521d204"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.902054 4803 generic.go:334] "Generic (PLEG): container finished" podID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerID="b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d" exitCode=0 Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.902414 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e10f98c4-901d-4c47-b9a7-67fb0521d204","Type":"ContainerDied","Data":"b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d"} Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.902492 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.903222 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e10f98c4-901d-4c47-b9a7-67fb0521d204","Type":"ContainerDied","Data":"033491ae1555bdbabf17739c816b1cab2febf8780b47c9cefe1a6097aedbbd63"} Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.903259 4803 scope.go:117] "RemoveContainer" containerID="fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.903952 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.905497 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbt7z\" (UniqueName: \"kubernetes.io/projected/e10f98c4-901d-4c47-b9a7-67fb0521d204-kube-api-access-mbt7z\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.905529 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.905542 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e10f98c4-901d-4c47-b9a7-67fb0521d204-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.905555 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.918123 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e10f98c4-901d-4c47-b9a7-67fb0521d204" (UID: "e10f98c4-901d-4c47-b9a7-67fb0521d204"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.949784 4803 scope.go:117] "RemoveContainer" containerID="20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.970700 4803 scope.go:117] "RemoveContainer" containerID="b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d" Jan 27 22:14:44 crc kubenswrapper[4803]: I0127 22:14:44.990905 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-config-data" (OuterVolumeSpecName: "config-data") pod "e10f98c4-901d-4c47-b9a7-67fb0521d204" (UID: "e10f98c4-901d-4c47-b9a7-67fb0521d204"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.006906 4803 scope.go:117] "RemoveContainer" containerID="35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.009069 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.009627 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10f98c4-901d-4c47-b9a7-67fb0521d204-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.045164 4803 scope.go:117] "RemoveContainer" containerID="fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d" Jan 27 22:14:45 crc kubenswrapper[4803]: E0127 22:14:45.048315 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d\": container with ID starting with fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d not found: ID does not exist" containerID="fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.051073 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d"} err="failed to get container status \"fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d\": rpc error: code = NotFound desc = could not find container \"fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d\": container with ID starting with fa29b9448d06da71da91ecc35fc1b3ac10889be006e397f812bc8ba57c742d6d not found: ID does not exist" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.051373 4803 scope.go:117] "RemoveContainer" containerID="20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5" Jan 27 22:14:45 crc kubenswrapper[4803]: E0127 22:14:45.054068 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5\": container with ID starting with 20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5 not found: ID does not exist" containerID="20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.054106 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5"} err="failed to get container status \"20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5\": rpc error: code = NotFound desc = could not find container \"20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5\": container with ID starting with 20df74d5acd0af4fab10fb4d334300283954cdf7688a6ad7ee3524f030adada5 not found: ID does not exist" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.054131 4803 scope.go:117] "RemoveContainer" containerID="b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d" Jan 27 22:14:45 crc kubenswrapper[4803]: E0127 22:14:45.055096 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d\": container with ID starting with b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d not found: ID does not exist" containerID="b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.055250 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d"} err="failed to get container status \"b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d\": rpc error: code = NotFound desc = could not find container \"b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d\": container with ID starting with b06178357f5bdd3e9f5c8813fb9ca040d5c4c1471d2ecc73a30f7650fb14f31d not found: ID does not exist" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.055384 4803 scope.go:117] "RemoveContainer" containerID="35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b" Jan 27 22:14:45 crc kubenswrapper[4803]: E0127 22:14:45.056285 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b\": container with ID starting with 35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b not found: ID does not exist" containerID="35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.056382 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b"} err="failed to get container status \"35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b\": rpc error: code = NotFound desc = could not find container \"35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b\": container with ID starting with 35393372e3f3f24e5d835e64e0e133b9007e6a7af0edb1aa60b1a1b99175af5b not found: ID does not exist" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.292484 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.308326 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.321384 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:45 crc kubenswrapper[4803]: E0127 22:14:45.322016 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="ceilometer-notification-agent" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.322033 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="ceilometer-notification-agent" Jan 27 22:14:45 crc kubenswrapper[4803]: E0127 22:14:45.322046 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="ceilometer-central-agent" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.322053 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="ceilometer-central-agent" Jan 27 22:14:45 crc kubenswrapper[4803]: E0127 22:14:45.322064 4803 cpu_manager.go:410] "RemoveStaleState: 
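[Annotation] The paired "ContainerStatus from runtime service failed" (NotFound) and "DeleteContainer returned error" entries above are benign: the kubelet is re-checking container IDs it has already removed, and CRI-O answers with gRPC NotFound. A minimal sketch of classifying such a response, using the real google.golang.org/grpc status/codes API (the helper name is mine):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isBenignNotFound reports whether a CRI error just means the container is
// already gone, which is what the repeated error pairs in the log amount to.
func isBenignNotFound(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	// Stand-in for the CRI-O response quoted in the log (NotFound over gRPC).
	err := status.Error(codes.NotFound, "could not find container: ID does not exist")
	if isBenignNotFound(err) {
		fmt.Println("container already removed; nothing left to delete")
	}
}
```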
removing container" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="proxy-httpd" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.322071 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="proxy-httpd" Jan 27 22:14:45 crc kubenswrapper[4803]: E0127 22:14:45.322113 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="sg-core" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.322119 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="sg-core" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.322314 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="sg-core" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.322328 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="ceilometer-notification-agent" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.322345 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="ceilometer-central-agent" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.322359 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" containerName="proxy-httpd" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.324355 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.326749 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.327023 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.329348 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.332008 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.416551 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.416637 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.416914 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-scripts\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.417026 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-run-httpd\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.417227 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slp8j\" (UniqueName: \"kubernetes.io/projected/8dcc3423-510a-4eb7-b290-2be50e295ec0-kube-api-access-slp8j\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.417447 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-log-httpd\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.417644 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-config-data\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.417698 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.520093 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-run-httpd\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.520213 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slp8j\" (UniqueName: \"kubernetes.io/projected/8dcc3423-510a-4eb7-b290-2be50e295ec0-kube-api-access-slp8j\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.520257 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-log-httpd\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.520311 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-config-data\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.520337 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " 
pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.520418 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.520475 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.520568 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-scripts\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.520697 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-run-httpd\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.522316 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-log-httpd\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.525768 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.526802 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-config-data\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.532928 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.533512 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.538244 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slp8j\" (UniqueName: \"kubernetes.io/projected/8dcc3423-510a-4eb7-b290-2be50e295ec0-kube-api-access-slp8j\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 
crc kubenswrapper[4803]: I0127 22:14:45.538289 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-scripts\") pod \"ceilometer-0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " pod="openstack/ceilometer-0" Jan 27 22:14:45 crc kubenswrapper[4803]: I0127 22:14:45.647123 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:14:46 crc kubenswrapper[4803]: I0127 22:14:46.161943 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:46 crc kubenswrapper[4803]: W0127 22:14:46.165467 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dcc3423_510a_4eb7_b290_2be50e295ec0.slice/crio-3f16ed3b68667e9b38ff323628fc378b409f166e1188bb6901bd0ee9d6cc357e WatchSource:0}: Error finding container 3f16ed3b68667e9b38ff323628fc378b409f166e1188bb6901bd0ee9d6cc357e: Status 404 returned error can't find the container with id 3f16ed3b68667e9b38ff323628fc378b409f166e1188bb6901bd0ee9d6cc357e Jan 27 22:14:46 crc kubenswrapper[4803]: I0127 22:14:46.318934 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e10f98c4-901d-4c47-b9a7-67fb0521d204" path="/var/lib/kubelet/pods/e10f98c4-901d-4c47-b9a7-67fb0521d204/volumes" Jan 27 22:14:46 crc kubenswrapper[4803]: I0127 22:14:46.941008 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dcc3423-510a-4eb7-b290-2be50e295ec0","Type":"ContainerStarted","Data":"1686167c8125031e7eb7397d44805a8ddbe667c68e46a554e3f1249e06974a6c"} Jan 27 22:14:46 crc kubenswrapper[4803]: I0127 22:14:46.941298 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dcc3423-510a-4eb7-b290-2be50e295ec0","Type":"ContainerStarted","Data":"3f16ed3b68667e9b38ff323628fc378b409f166e1188bb6901bd0ee9d6cc357e"} Jan 27 22:14:47 crc kubenswrapper[4803]: I0127 22:14:47.308023 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:14:47 crc kubenswrapper[4803]: E0127 22:14:47.308537 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:14:47 crc kubenswrapper[4803]: I0127 22:14:47.860102 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-xmlbc"] Jan 27 22:14:47 crc kubenswrapper[4803]: I0127 22:14:47.870299 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-xmlbc"] Jan 27 22:14:47 crc kubenswrapper[4803]: I0127 22:14:47.923153 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-sjbk6"] Jan 27 22:14:47 crc kubenswrapper[4803]: I0127 22:14:47.924814 4803 util.go:30] "No sandbox for pod can be found. 
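[Annotation] The manager.go:1169 "Failed to process watch event ... Status 404" warning above is a known benign race: cAdvisor sees the new crio-... cgroup before the container is registered with the runtime. The cgroup path itself encodes the pod UID (with dashes swapped for underscores) and the CRI-O container ID; a small sketch that recovers both (the regexp is fitted to the besteffort paths in this log and is an assumption, not a published format):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var re = regexp.MustCompile(`kubepods-[a-z]+-pod([0-9a-f_]+)\.slice/crio-([0-9a-f]+)`)

func main() {
	p := "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dcc3423_510a_4eb7_b290_2be50e295ec0.slice/crio-3f16ed3b68667e9b38ff323628fc378b409f166e1188bb6901bd0ee9d6cc357e"
	m := re.FindStringSubmatch(p)
	if m == nil {
		panic("no match")
	}
	podUID := strings.ReplaceAll(m[1], "_", "-")
	fmt.Println("podUID:", podUID)          // 8dcc3423-510a-4eb7-b290-2be50e295ec0
	fmt.Println("containerID:", m[2][:12]) // 3f16ed3b6866 (short form)
}
```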
Need to start a new one" pod="openstack/heat-db-sync-sjbk6" Jan 27 22:14:47 crc kubenswrapper[4803]: I0127 22:14:47.951601 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-sjbk6"] Jan 27 22:14:47 crc kubenswrapper[4803]: I0127 22:14:47.971777 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dcc3423-510a-4eb7-b290-2be50e295ec0","Type":"ContainerStarted","Data":"92f2724d4998486b192c2633e9a346c3ed3490a72a16ccb3f138dd5bef795ed4"} Jan 27 22:14:47 crc kubenswrapper[4803]: I0127 22:14:47.982565 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-config-data\") pod \"heat-db-sync-sjbk6\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") " pod="openstack/heat-db-sync-sjbk6" Jan 27 22:14:47 crc kubenswrapper[4803]: I0127 22:14:47.982625 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-combined-ca-bundle\") pod \"heat-db-sync-sjbk6\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") " pod="openstack/heat-db-sync-sjbk6" Jan 27 22:14:47 crc kubenswrapper[4803]: I0127 22:14:47.982668 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4bc8\" (UniqueName: \"kubernetes.io/projected/dfdeec7e-e323-4a7a-9a5c-badcec773861-kube-api-access-j4bc8\") pod \"heat-db-sync-sjbk6\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") " pod="openstack/heat-db-sync-sjbk6" Jan 27 22:14:48 crc kubenswrapper[4803]: I0127 22:14:48.085141 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-config-data\") pod \"heat-db-sync-sjbk6\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") " pod="openstack/heat-db-sync-sjbk6" Jan 27 22:14:48 crc kubenswrapper[4803]: I0127 22:14:48.085192 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-combined-ca-bundle\") pod \"heat-db-sync-sjbk6\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") " pod="openstack/heat-db-sync-sjbk6" Jan 27 22:14:48 crc kubenswrapper[4803]: I0127 22:14:48.085239 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4bc8\" (UniqueName: \"kubernetes.io/projected/dfdeec7e-e323-4a7a-9a5c-badcec773861-kube-api-access-j4bc8\") pod \"heat-db-sync-sjbk6\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") " pod="openstack/heat-db-sync-sjbk6" Jan 27 22:14:48 crc kubenswrapper[4803]: I0127 22:14:48.090808 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-combined-ca-bundle\") pod \"heat-db-sync-sjbk6\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") " pod="openstack/heat-db-sync-sjbk6" Jan 27 22:14:48 crc kubenswrapper[4803]: I0127 22:14:48.092704 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-config-data\") pod \"heat-db-sync-sjbk6\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") " pod="openstack/heat-db-sync-sjbk6" Jan 27 22:14:48 crc 
kubenswrapper[4803]: I0127 22:14:48.103237 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4bc8\" (UniqueName: \"kubernetes.io/projected/dfdeec7e-e323-4a7a-9a5c-badcec773861-kube-api-access-j4bc8\") pod \"heat-db-sync-sjbk6\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") " pod="openstack/heat-db-sync-sjbk6" Jan 27 22:14:48 crc kubenswrapper[4803]: I0127 22:14:48.255622 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-sjbk6" Jan 27 22:14:48 crc kubenswrapper[4803]: I0127 22:14:48.339333 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c9761e2-3f55-4c05-be61-594fa9592844" path="/var/lib/kubelet/pods/6c9761e2-3f55-4c05-be61-594fa9592844/volumes" Jan 27 22:14:48 crc kubenswrapper[4803]: I0127 22:14:48.827894 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-sjbk6"] Jan 27 22:14:48 crc kubenswrapper[4803]: I0127 22:14:48.984540 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sjbk6" event={"ID":"dfdeec7e-e323-4a7a-9a5c-badcec773861","Type":"ContainerStarted","Data":"5dccb3d3f759be14e4491e8cba0185f9084561408a867c67741d4fdf615fa415"} Jan 27 22:14:48 crc kubenswrapper[4803]: I0127 22:14:48.986832 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dcc3423-510a-4eb7-b290-2be50e295ec0","Type":"ContainerStarted","Data":"6c5046a90e8a74bd6800c2775310f6f39ee7e236080a737512eb7cb5b8c7f4b9"} Jan 27 22:14:50 crc kubenswrapper[4803]: I0127 22:14:50.329241 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 22:14:50 crc kubenswrapper[4803]: I0127 22:14:50.451884 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 22:14:51 crc kubenswrapper[4803]: I0127 22:14:51.035117 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dcc3423-510a-4eb7-b290-2be50e295ec0","Type":"ContainerStarted","Data":"fb8a8fcf9e69d17997f6257069d172c6090a990f2d5057ba56f93aae50109f6e"} Jan 27 22:14:51 crc kubenswrapper[4803]: I0127 22:14:51.035870 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 22:14:51 crc kubenswrapper[4803]: I0127 22:14:51.539367 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.63244091 podStartE2EDuration="6.539345612s" podCreationTimestamp="2026-01-27 22:14:45 +0000 UTC" firstStartedPulling="2026-01-27 22:14:46.168238263 +0000 UTC m=+1638.584259962" lastFinishedPulling="2026-01-27 22:14:50.075142965 +0000 UTC m=+1642.491164664" observedRunningTime="2026-01-27 22:14:51.069981173 +0000 UTC m=+1643.486002872" watchObservedRunningTime="2026-01-27 22:14:51.539345612 +0000 UTC m=+1643.955367311" Jan 27 22:14:51 crc kubenswrapper[4803]: I0127 22:14:51.545753 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 22:14:52 crc kubenswrapper[4803]: I0127 22:14:52.061990 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:53 crc kubenswrapper[4803]: I0127 22:14:53.060030 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="ceilometer-central-agent" 
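[Annotation] The pod_startup_latency_tracker line above is internally consistent: the SLO figure is the end-to-end startup duration minus the image-pull window, all reproducible from the timestamps it prints. A worked check using only values copied from that log entry:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-27 22:14:45 +0000 UTC")            // podCreationTimestamp
	firstPull := mustParse("2026-01-27 22:14:46.168238263 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-01-27 22:14:50.075142965 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2026-01-27 22:14:51.539345612 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)     // 6.539345612s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 3.906904702s spent pulling images
	fmt.Println("SLO duration:", e2e-pull) // 2.63244091s = podStartSLOduration
}
```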
containerID="cri-o://1686167c8125031e7eb7397d44805a8ddbe667c68e46a554e3f1249e06974a6c" gracePeriod=30 Jan 27 22:14:53 crc kubenswrapper[4803]: I0127 22:14:53.060073 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="sg-core" containerID="cri-o://6c5046a90e8a74bd6800c2775310f6f39ee7e236080a737512eb7cb5b8c7f4b9" gracePeriod=30 Jan 27 22:14:53 crc kubenswrapper[4803]: I0127 22:14:53.060052 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="proxy-httpd" containerID="cri-o://fb8a8fcf9e69d17997f6257069d172c6090a990f2d5057ba56f93aae50109f6e" gracePeriod=30 Jan 27 22:14:53 crc kubenswrapper[4803]: I0127 22:14:53.060098 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="ceilometer-notification-agent" containerID="cri-o://92f2724d4998486b192c2633e9a346c3ed3490a72a16ccb3f138dd5bef795ed4" gracePeriod=30 Jan 27 22:14:54 crc kubenswrapper[4803]: I0127 22:14:54.074784 4803 generic.go:334] "Generic (PLEG): container finished" podID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerID="fb8a8fcf9e69d17997f6257069d172c6090a990f2d5057ba56f93aae50109f6e" exitCode=0 Jan 27 22:14:54 crc kubenswrapper[4803]: I0127 22:14:54.074823 4803 generic.go:334] "Generic (PLEG): container finished" podID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerID="6c5046a90e8a74bd6800c2775310f6f39ee7e236080a737512eb7cb5b8c7f4b9" exitCode=2 Jan 27 22:14:54 crc kubenswrapper[4803]: I0127 22:14:54.074832 4803 generic.go:334] "Generic (PLEG): container finished" podID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerID="92f2724d4998486b192c2633e9a346c3ed3490a72a16ccb3f138dd5bef795ed4" exitCode=0 Jan 27 22:14:54 crc kubenswrapper[4803]: I0127 22:14:54.074839 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dcc3423-510a-4eb7-b290-2be50e295ec0","Type":"ContainerDied","Data":"fb8a8fcf9e69d17997f6257069d172c6090a990f2d5057ba56f93aae50109f6e"} Jan 27 22:14:54 crc kubenswrapper[4803]: I0127 22:14:54.074942 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dcc3423-510a-4eb7-b290-2be50e295ec0","Type":"ContainerDied","Data":"6c5046a90e8a74bd6800c2775310f6f39ee7e236080a737512eb7cb5b8c7f4b9"} Jan 27 22:14:54 crc kubenswrapper[4803]: I0127 22:14:54.074956 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dcc3423-510a-4eb7-b290-2be50e295ec0","Type":"ContainerDied","Data":"92f2724d4998486b192c2633e9a346c3ed3490a72a16ccb3f138dd5bef795ed4"} Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.093805 4803 generic.go:334] "Generic (PLEG): container finished" podID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerID="1686167c8125031e7eb7397d44805a8ddbe667c68e46a554e3f1249e06974a6c" exitCode=0 Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.093856 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dcc3423-510a-4eb7-b290-2be50e295ec0","Type":"ContainerDied","Data":"1686167c8125031e7eb7397d44805a8ddbe667c68e46a554e3f1249e06974a6c"} Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.395276 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.536118 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-scripts\") pod \"8dcc3423-510a-4eb7-b290-2be50e295ec0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.536191 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-log-httpd\") pod \"8dcc3423-510a-4eb7-b290-2be50e295ec0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.536240 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-run-httpd\") pod \"8dcc3423-510a-4eb7-b290-2be50e295ec0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.536432 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slp8j\" (UniqueName: \"kubernetes.io/projected/8dcc3423-510a-4eb7-b290-2be50e295ec0-kube-api-access-slp8j\") pod \"8dcc3423-510a-4eb7-b290-2be50e295ec0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.536517 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-ceilometer-tls-certs\") pod \"8dcc3423-510a-4eb7-b290-2be50e295ec0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.536558 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-combined-ca-bundle\") pod \"8dcc3423-510a-4eb7-b290-2be50e295ec0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.536583 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-sg-core-conf-yaml\") pod \"8dcc3423-510a-4eb7-b290-2be50e295ec0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.536603 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-config-data\") pod \"8dcc3423-510a-4eb7-b290-2be50e295ec0\" (UID: \"8dcc3423-510a-4eb7-b290-2be50e295ec0\") " Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.536756 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8dcc3423-510a-4eb7-b290-2be50e295ec0" (UID: "8dcc3423-510a-4eb7-b290-2be50e295ec0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.537047 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8dcc3423-510a-4eb7-b290-2be50e295ec0" (UID: "8dcc3423-510a-4eb7-b290-2be50e295ec0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.537449 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.537468 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dcc3423-510a-4eb7-b290-2be50e295ec0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.544059 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-scripts" (OuterVolumeSpecName: "scripts") pod "8dcc3423-510a-4eb7-b290-2be50e295ec0" (UID: "8dcc3423-510a-4eb7-b290-2be50e295ec0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.547974 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dcc3423-510a-4eb7-b290-2be50e295ec0-kube-api-access-slp8j" (OuterVolumeSpecName: "kube-api-access-slp8j") pod "8dcc3423-510a-4eb7-b290-2be50e295ec0" (UID: "8dcc3423-510a-4eb7-b290-2be50e295ec0"). InnerVolumeSpecName "kube-api-access-slp8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.573633 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8dcc3423-510a-4eb7-b290-2be50e295ec0" (UID: "8dcc3423-510a-4eb7-b290-2be50e295ec0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.627290 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8dcc3423-510a-4eb7-b290-2be50e295ec0" (UID: "8dcc3423-510a-4eb7-b290-2be50e295ec0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.639342 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slp8j\" (UniqueName: \"kubernetes.io/projected/8dcc3423-510a-4eb7-b290-2be50e295ec0-kube-api-access-slp8j\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.639380 4803 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.639394 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.639407 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.682537 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dcc3423-510a-4eb7-b290-2be50e295ec0" (UID: "8dcc3423-510a-4eb7-b290-2be50e295ec0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.703574 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-config-data" (OuterVolumeSpecName: "config-data") pod "8dcc3423-510a-4eb7-b290-2be50e295ec0" (UID: "8dcc3423-510a-4eb7-b290-2be50e295ec0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.741952 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:55 crc kubenswrapper[4803]: I0127 22:14:55.741993 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dcc3423-510a-4eb7-b290-2be50e295ec0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.006170 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="993ad889-77c3-480e-8b5b-985766d488be" containerName="rabbitmq" containerID="cri-o://c8f3ab958869c4a1752ed84096c00c9007044ec28b1cea82af402fadc15df134" gracePeriod=604795 Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.108315 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dcc3423-510a-4eb7-b290-2be50e295ec0","Type":"ContainerDied","Data":"3f16ed3b68667e9b38ff323628fc378b409f166e1188bb6901bd0ee9d6cc357e"} Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.108373 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.108383 4803 scope.go:117] "RemoveContainer" containerID="fb8a8fcf9e69d17997f6257069d172c6090a990f2d5057ba56f93aae50109f6e" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.146081 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.152350 4803 scope.go:117] "RemoveContainer" containerID="6c5046a90e8a74bd6800c2775310f6f39ee7e236080a737512eb7cb5b8c7f4b9" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.161039 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.184891 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:56 crc kubenswrapper[4803]: E0127 22:14:56.185425 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="ceilometer-notification-agent" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.185438 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="ceilometer-notification-agent" Jan 27 22:14:56 crc kubenswrapper[4803]: E0127 22:14:56.185449 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="proxy-httpd" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.185455 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="proxy-httpd" Jan 27 22:14:56 crc kubenswrapper[4803]: E0127 22:14:56.185467 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="sg-core" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.185475 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="sg-core" Jan 27 22:14:56 crc kubenswrapper[4803]: E0127 22:14:56.185487 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="ceilometer-central-agent" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.185493 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="ceilometer-central-agent" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.185728 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="sg-core" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.185741 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="proxy-httpd" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.185752 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="ceilometer-central-agent" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.185765 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" containerName="ceilometer-notification-agent" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.187795 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.192149 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.192198 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.193595 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.202051 4803 scope.go:117] "RemoveContainer" containerID="92f2724d4998486b192c2633e9a346c3ed3490a72a16ccb3f138dd5bef795ed4" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.211972 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.227417 4803 scope.go:117] "RemoveContainer" containerID="1686167c8125031e7eb7397d44805a8ddbe667c68e46a554e3f1249e06974a6c" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.250807 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.251074 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-scripts\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.251139 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.251170 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-config-data\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.251250 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-run-httpd\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.251281 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.251335 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swmgw\" (UniqueName: 
\"kubernetes.io/projected/fbed465b-e99e-4ef2-8217-f363bd3ec042-kube-api-access-swmgw\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.251402 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-log-httpd\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.319323 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dcc3423-510a-4eb7-b290-2be50e295ec0" path="/var/lib/kubelet/pods/8dcc3423-510a-4eb7-b290-2be50e295ec0/volumes" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.353672 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-scripts\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.353732 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.353754 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-config-data\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.353802 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-run-httpd\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.353827 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.353879 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swmgw\" (UniqueName: \"kubernetes.io/projected/fbed465b-e99e-4ef2-8217-f363bd3ec042-kube-api-access-swmgw\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.353919 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-log-httpd\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.353974 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.356069 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-run-httpd\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.356458 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-log-httpd\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.358913 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.359212 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.359398 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-scripts\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.359989 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.360637 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-config-data\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.377002 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swmgw\" (UniqueName: \"kubernetes.io/projected/fbed465b-e99e-4ef2-8217-f363bd3ec042-kube-api-access-swmgw\") pod \"ceilometer-0\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.510607 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 22:14:56 crc kubenswrapper[4803]: I0127 22:14:56.796160 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="73021b6c-3762-44f7-af8d-efd3ff4e4b7b" containerName="rabbitmq" containerID="cri-o://c5a3ecc082d0bd45b33fb4d378b55ad449b2c0268808df488b266d8add88c35e" gracePeriod=604795 Jan 27 22:14:57 crc kubenswrapper[4803]: W0127 22:14:57.101391 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbed465b_e99e_4ef2_8217_f363bd3ec042.slice/crio-79916fa7dceb5b8492e56d34f3daba340c8c9cba83c453f25b03ccd6c1d897a9 WatchSource:0}: Error finding container 79916fa7dceb5b8492e56d34f3daba340c8c9cba83c453f25b03ccd6c1d897a9: Status 404 returned error can't find the container with id 79916fa7dceb5b8492e56d34f3daba340c8c9cba83c453f25b03ccd6c1d897a9 Jan 27 22:14:57 crc kubenswrapper[4803]: I0127 22:14:57.124985 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 22:14:57 crc kubenswrapper[4803]: I0127 22:14:57.129321 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerStarted","Data":"79916fa7dceb5b8492e56d34f3daba340c8c9cba83c453f25b03ccd6c1d897a9"} Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.148950 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947"] Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.151040 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.154537 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.156318 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.166750 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947"] Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.175280 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gq78\" (UniqueName: \"kubernetes.io/projected/c23005bb-85d7-416b-8668-522a0d5785cb-kube-api-access-7gq78\") pod \"collect-profiles-29492535-h7947\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.175491 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c23005bb-85d7-416b-8668-522a0d5785cb-secret-volume\") pod \"collect-profiles-29492535-h7947\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.175601 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c23005bb-85d7-416b-8668-522a0d5785cb-config-volume\") pod \"collect-profiles-29492535-h7947\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.278531 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c23005bb-85d7-416b-8668-522a0d5785cb-secret-volume\") pod \"collect-profiles-29492535-h7947\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.278626 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c23005bb-85d7-416b-8668-522a0d5785cb-config-volume\") pod \"collect-profiles-29492535-h7947\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.278769 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gq78\" (UniqueName: \"kubernetes.io/projected/c23005bb-85d7-416b-8668-522a0d5785cb-kube-api-access-7gq78\") pod \"collect-profiles-29492535-h7947\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.279736 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c23005bb-85d7-416b-8668-522a0d5785cb-config-volume\") pod \"collect-profiles-29492535-h7947\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.291078 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c23005bb-85d7-416b-8668-522a0d5785cb-secret-volume\") pod \"collect-profiles-29492535-h7947\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.295038 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gq78\" (UniqueName: \"kubernetes.io/projected/c23005bb-85d7-416b-8668-522a0d5785cb-kube-api-access-7gq78\") pod \"collect-profiles-29492535-h7947\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.307506 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:15:00 crc kubenswrapper[4803]: E0127 22:15:00.307999 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:15:00 crc kubenswrapper[4803]: I0127 22:15:00.477716 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" Jan 27 22:15:03 crc kubenswrapper[4803]: I0127 22:15:03.282829 4803 generic.go:334] "Generic (PLEG): container finished" podID="993ad889-77c3-480e-8b5b-985766d488be" containerID="c8f3ab958869c4a1752ed84096c00c9007044ec28b1cea82af402fadc15df134" exitCode=0 Jan 27 22:15:03 crc kubenswrapper[4803]: I0127 22:15:03.282918 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"993ad889-77c3-480e-8b5b-985766d488be","Type":"ContainerDied","Data":"c8f3ab958869c4a1752ed84096c00c9007044ec28b1cea82af402fadc15df134"} Jan 27 22:15:03 crc kubenswrapper[4803]: I0127 22:15:03.683318 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="73021b6c-3762-44f7-af8d-efd3ff4e4b7b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Jan 27 22:15:04 crc kubenswrapper[4803]: I0127 22:15:04.299798 4803 generic.go:334] "Generic (PLEG): container finished" podID="73021b6c-3762-44f7-af8d-efd3ff4e4b7b" containerID="c5a3ecc082d0bd45b33fb4d378b55ad449b2c0268808df488b266d8add88c35e" exitCode=0 Jan 27 22:15:04 crc kubenswrapper[4803]: I0127 22:15:04.300194 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73021b6c-3762-44f7-af8d-efd3ff4e4b7b","Type":"ContainerDied","Data":"c5a3ecc082d0bd45b33fb4d378b55ad449b2c0268808df488b266d8add88c35e"} Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.453005 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-kpp6w"] Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.456816 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.462541 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.471190 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-kpp6w"] Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.558342 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kpc5\" (UniqueName: \"kubernetes.io/projected/c7f762c6-29a3-4eb1-b92a-db23c0692772-kube-api-access-7kpc5\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.558509 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.558571 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.558608 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.558679 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-config\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.559052 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.559116 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.661796 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" 
(UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.661875 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.661977 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kpc5\" (UniqueName: \"kubernetes.io/projected/c7f762c6-29a3-4eb1-b92a-db23c0692772-kube-api-access-7kpc5\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.662121 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.662200 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.662275 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-config\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.662295 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.662752 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.663263 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.663623 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " 
pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.663816 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.664641 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.664796 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-config\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.688262 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kpc5\" (UniqueName: \"kubernetes.io/projected/c7f762c6-29a3-4eb1-b92a-db23c0692772-kube-api-access-7kpc5\") pod \"dnsmasq-dns-5b75489c6f-kpp6w\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") " pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:06 crc kubenswrapper[4803]: I0127 22:15:06.784345 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" Jan 27 22:15:08 crc kubenswrapper[4803]: I0127 22:15:08.212324 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="993ad889-77c3-480e-8b5b-985766d488be" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.605142 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.639678 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/993ad889-77c3-480e-8b5b-985766d488be-erlang-cookie-secret\") pod \"993ad889-77c3-480e-8b5b-985766d488be\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.639768 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-tls\") pod \"993ad889-77c3-480e-8b5b-985766d488be\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.639798 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-plugins-conf\") pod \"993ad889-77c3-480e-8b5b-985766d488be\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.639969 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-plugins\") pod \"993ad889-77c3-480e-8b5b-985766d488be\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.640029 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-confd\") pod \"993ad889-77c3-480e-8b5b-985766d488be\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.640100 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/993ad889-77c3-480e-8b5b-985766d488be-pod-info\") pod \"993ad889-77c3-480e-8b5b-985766d488be\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.640137 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-erlang-cookie\") pod \"993ad889-77c3-480e-8b5b-985766d488be\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.644319 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "993ad889-77c3-480e-8b5b-985766d488be" (UID: "993ad889-77c3-480e-8b5b-985766d488be"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.649124 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\") pod \"993ad889-77c3-480e-8b5b-985766d488be\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.649306 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22ttr\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-kube-api-access-22ttr\") pod \"993ad889-77c3-480e-8b5b-985766d488be\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.649364 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-server-conf\") pod \"993ad889-77c3-480e-8b5b-985766d488be\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.649391 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-config-data\") pod \"993ad889-77c3-480e-8b5b-985766d488be\" (UID: \"993ad889-77c3-480e-8b5b-985766d488be\") " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.650544 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "993ad889-77c3-480e-8b5b-985766d488be" (UID: "993ad889-77c3-480e-8b5b-985766d488be"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.653255 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "993ad889-77c3-480e-8b5b-985766d488be" (UID: "993ad889-77c3-480e-8b5b-985766d488be"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.653296 4803 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.653433 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.661089 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993ad889-77c3-480e-8b5b-985766d488be-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "993ad889-77c3-480e-8b5b-985766d488be" (UID: "993ad889-77c3-480e-8b5b-985766d488be"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.668198 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "993ad889-77c3-480e-8b5b-985766d488be" (UID: "993ad889-77c3-480e-8b5b-985766d488be"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.676151 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-kube-api-access-22ttr" (OuterVolumeSpecName: "kube-api-access-22ttr") pod "993ad889-77c3-480e-8b5b-985766d488be" (UID: "993ad889-77c3-480e-8b5b-985766d488be"). InnerVolumeSpecName "kube-api-access-22ttr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.677069 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/993ad889-77c3-480e-8b5b-985766d488be-pod-info" (OuterVolumeSpecName: "pod-info") pod "993ad889-77c3-480e-8b5b-985766d488be" (UID: "993ad889-77c3-480e-8b5b-985766d488be"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.733363 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67" (OuterVolumeSpecName: "persistence") pod "993ad889-77c3-480e-8b5b-985766d488be" (UID: "993ad889-77c3-480e-8b5b-985766d488be"). InnerVolumeSpecName "pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.748512 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-config-data" (OuterVolumeSpecName: "config-data") pod "993ad889-77c3-480e-8b5b-985766d488be" (UID: "993ad889-77c3-480e-8b5b-985766d488be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.753604 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-server-conf" (OuterVolumeSpecName: "server-conf") pod "993ad889-77c3-480e-8b5b-985766d488be" (UID: "993ad889-77c3-480e-8b5b-985766d488be"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.758503 4803 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\") on node \"crc\" " Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.776947 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22ttr\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-kube-api-access-22ttr\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.776990 4803 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.777003 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/993ad889-77c3-480e-8b5b-985766d488be-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.777013 4803 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/993ad889-77c3-480e-8b5b-985766d488be-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.777024 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.777034 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.777060 4803 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/993ad889-77c3-480e-8b5b-985766d488be-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.804645 4803 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.805492 4803 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67") on node "crc" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.879586 4803 reconciler_common.go:293] "Volume detached for volume \"pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.882002 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "993ad889-77c3-480e-8b5b-985766d488be" (UID: "993ad889-77c3-480e-8b5b-985766d488be"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:15:09 crc kubenswrapper[4803]: I0127 22:15:09.981912 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/993ad889-77c3-480e-8b5b-985766d488be-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.372403 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"993ad889-77c3-480e-8b5b-985766d488be","Type":"ContainerDied","Data":"5b63f1e6abb9bfb560d3c31928ccb4aae967fa40cbbe40a0f07963acdb9761d6"} Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.372458 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.372465 4803 scope.go:117] "RemoveContainer" containerID="c8f3ab958869c4a1752ed84096c00c9007044ec28b1cea82af402fadc15df134" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.410617 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.427462 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.445513 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 22:15:10 crc kubenswrapper[4803]: E0127 22:15:10.450578 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993ad889-77c3-480e-8b5b-985766d488be" containerName="setup-container" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.450614 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="993ad889-77c3-480e-8b5b-985766d488be" containerName="setup-container" Jan 27 22:15:10 crc kubenswrapper[4803]: E0127 22:15:10.450630 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993ad889-77c3-480e-8b5b-985766d488be" containerName="rabbitmq" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.450638 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="993ad889-77c3-480e-8b5b-985766d488be" containerName="rabbitmq" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.451025 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="993ad889-77c3-480e-8b5b-985766d488be" containerName="rabbitmq" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.452351 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.461361 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.602421 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3998c673-ac46-4c45-a424-a92a7e88853c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.602750 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3998c673-ac46-4c45-a424-a92a7e88853c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.602815 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.602865 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.602886 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.602921 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.602940 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3998c673-ac46-4c45-a424-a92a7e88853c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.602985 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3998c673-ac46-4c45-a424-a92a7e88853c-config-data\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.602999 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/3998c673-ac46-4c45-a424-a92a7e88853c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.603034 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.603071 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfzj9\" (UniqueName: \"kubernetes.io/projected/3998c673-ac46-4c45-a424-a92a7e88853c-kube-api-access-sfzj9\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.704887 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3998c673-ac46-4c45-a424-a92a7e88853c-config-data\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.704926 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3998c673-ac46-4c45-a424-a92a7e88853c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.704982 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.705028 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfzj9\" (UniqueName: \"kubernetes.io/projected/3998c673-ac46-4c45-a424-a92a7e88853c-kube-api-access-sfzj9\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.705082 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3998c673-ac46-4c45-a424-a92a7e88853c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.705143 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3998c673-ac46-4c45-a424-a92a7e88853c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.705191 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 
22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.705229 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.705248 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.705284 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.705300 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3998c673-ac46-4c45-a424-a92a7e88853c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.705745 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3998c673-ac46-4c45-a424-a92a7e88853c-config-data\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.705982 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3998c673-ac46-4c45-a424-a92a7e88853c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.706169 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.706400 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.706870 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3998c673-ac46-4c45-a424-a92a7e88853c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.710764 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/3998c673-ac46-4c45-a424-a92a7e88853c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.710788 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.715268 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3998c673-ac46-4c45-a424-a92a7e88853c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.715546 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3998c673-ac46-4c45-a424-a92a7e88853c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.715624 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.715658 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4350b2c16e320f8700639adfba841d8f1a9d9743f1e242da10e10d34d90f7352/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.720839 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfzj9\" (UniqueName: \"kubernetes.io/projected/3998c673-ac46-4c45-a424-a92a7e88853c-kube-api-access-sfzj9\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.770893 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e3a394a-a356-4f4c-82f3-29d65c823f67\") pod \"rabbitmq-server-2\" (UID: \"3998c673-ac46-4c45-a424-a92a7e88853c\") " pod="openstack/rabbitmq-server-2" Jan 27 22:15:10 crc kubenswrapper[4803]: I0127 22:15:10.778628 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.394227 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73021b6c-3762-44f7-af8d-efd3ff4e4b7b","Type":"ContainerDied","Data":"6883c57f26c240ec55c24c4b1482462402722981a2e0e323cb0c53a93d307b46"} Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.394683 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6883c57f26c240ec55c24c4b1482462402722981a2e0e323cb0c53a93d307b46" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.432383 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.530932 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-plugins\") pod \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.530992 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk6n7\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-kube-api-access-vk6n7\") pod \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.531045 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-plugins-conf\") pod \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.531095 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-tls\") pod \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.531189 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-confd\") pod \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.531226 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-pod-info\") pod \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.534186 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "73021b6c-3762-44f7-af8d-efd3ff4e4b7b" (UID: "73021b6c-3762-44f7-af8d-efd3ff4e4b7b"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.534363 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "73021b6c-3762-44f7-af8d-efd3ff4e4b7b" (UID: "73021b6c-3762-44f7-af8d-efd3ff4e4b7b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.535327 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\") pod \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.535401 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-server-conf\") pod \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.535473 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-erlang-cookie\") pod \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.535492 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-config-data\") pod \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.535579 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-erlang-cookie-secret\") pod \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\" (UID: \"73021b6c-3762-44f7-af8d-efd3ff4e4b7b\") " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.536513 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.536531 4803 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.545312 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-pod-info" (OuterVolumeSpecName: "pod-info") pod "73021b6c-3762-44f7-af8d-efd3ff4e4b7b" (UID: "73021b6c-3762-44f7-af8d-efd3ff4e4b7b"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.549141 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-kube-api-access-vk6n7" (OuterVolumeSpecName: "kube-api-access-vk6n7") pod "73021b6c-3762-44f7-af8d-efd3ff4e4b7b" (UID: "73021b6c-3762-44f7-af8d-efd3ff4e4b7b"). InnerVolumeSpecName "kube-api-access-vk6n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.549869 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "73021b6c-3762-44f7-af8d-efd3ff4e4b7b" (UID: "73021b6c-3762-44f7-af8d-efd3ff4e4b7b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.554972 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "73021b6c-3762-44f7-af8d-efd3ff4e4b7b" (UID: "73021b6c-3762-44f7-af8d-efd3ff4e4b7b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.559743 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "73021b6c-3762-44f7-af8d-efd3ff4e4b7b" (UID: "73021b6c-3762-44f7-af8d-efd3ff4e4b7b"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.650556 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.650596 4803 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.650609 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk6n7\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-kube-api-access-vk6n7\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.650621 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.650632 4803 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.700782 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-config-data" (OuterVolumeSpecName: "config-data") pod "73021b6c-3762-44f7-af8d-efd3ff4e4b7b" (UID: "73021b6c-3762-44f7-af8d-efd3ff4e4b7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.749294 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-server-conf" (OuterVolumeSpecName: "server-conf") pod "73021b6c-3762-44f7-af8d-efd3ff4e4b7b" (UID: "73021b6c-3762-44f7-af8d-efd3ff4e4b7b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.752631 4803 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.752660 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.801004 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8" (OuterVolumeSpecName: "persistence") pod "73021b6c-3762-44f7-af8d-efd3ff4e4b7b" (UID: "73021b6c-3762-44f7-af8d-efd3ff4e4b7b"). InnerVolumeSpecName "pvc-be234930-8d42-4804-9d28-b9eb06fbaac8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.816519 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "73021b6c-3762-44f7-af8d-efd3ff4e4b7b" (UID: "73021b6c-3762-44f7-af8d-efd3ff4e4b7b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.854765 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73021b6c-3762-44f7-af8d-efd3ff4e4b7b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.854831 4803 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\") on node \"crc\" " Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.884236 4803 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.884398 4803 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-be234930-8d42-4804-9d28-b9eb06fbaac8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8") on node "crc" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.956815 4803 reconciler_common.go:293] "Volume detached for volume \"pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\") on node \"crc\" DevicePath \"\"" Jan 27 22:15:11 crc kubenswrapper[4803]: I0127 22:15:11.992576 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947"] Jan 27 22:15:12 crc kubenswrapper[4803]: E0127 22:15:12.147795 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 27 22:15:12 crc kubenswrapper[4803]: E0127 22:15:12.147840 4803 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 27 22:15:12 crc kubenswrapper[4803]: E0127 22:15:12.147962 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4bc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-sjbk6_openstack(dfdeec7e-e323-4a7a-9a5c-badcec773861): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 22:15:12 crc kubenswrapper[4803]: E0127 22:15:12.151934 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-sjbk6" podUID="dfdeec7e-e323-4a7a-9a5c-badcec773861" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.307465 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:15:12 crc kubenswrapper[4803]: E0127 22:15:12.307739 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.320171 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="993ad889-77c3-480e-8b5b-985766d488be" path="/var/lib/kubelet/pods/993ad889-77c3-480e-8b5b-985766d488be/volumes" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.407340 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: E0127 22:15:12.408333 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-sjbk6" podUID="dfdeec7e-e323-4a7a-9a5c-badcec773861" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.449920 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.471564 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.485000 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 22:15:12 crc kubenswrapper[4803]: E0127 22:15:12.485690 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73021b6c-3762-44f7-af8d-efd3ff4e4b7b" containerName="setup-container" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.485721 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="73021b6c-3762-44f7-af8d-efd3ff4e4b7b" containerName="setup-container" Jan 27 22:15:12 crc kubenswrapper[4803]: E0127 22:15:12.485782 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73021b6c-3762-44f7-af8d-efd3ff4e4b7b" containerName="rabbitmq" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.485791 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="73021b6c-3762-44f7-af8d-efd3ff4e4b7b" containerName="rabbitmq" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.486121 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="73021b6c-3762-44f7-af8d-efd3ff4e4b7b" containerName="rabbitmq" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.488013 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.490156 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-p74n6" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.490952 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.491161 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.491774 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.492076 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.492386 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.492628 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.498803 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.573286 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71236ece-7761-4d82-a93c-c5b40c33660b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.573419 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71236ece-7761-4d82-a93c-c5b40c33660b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.573513 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.573562 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71236ece-7761-4d82-a93c-c5b40c33660b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.573645 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.573679 4803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71236ece-7761-4d82-a93c-c5b40c33660b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.573709 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhmc\" (UniqueName: \"kubernetes.io/projected/71236ece-7761-4d82-a93c-c5b40c33660b-kube-api-access-prhmc\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.573729 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.573764 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.573840 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71236ece-7761-4d82-a93c-c5b40c33660b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.573890 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.677028 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.677096 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71236ece-7761-4d82-a93c-c5b40c33660b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.677160 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 
22:15:12.677181 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71236ece-7761-4d82-a93c-c5b40c33660b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.677207 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prhmc\" (UniqueName: \"kubernetes.io/projected/71236ece-7761-4d82-a93c-c5b40c33660b-kube-api-access-prhmc\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.677222 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.677255 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.677308 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71236ece-7761-4d82-a93c-c5b40c33660b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.677330 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.677375 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71236ece-7761-4d82-a93c-c5b40c33660b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.677436 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71236ece-7761-4d82-a93c-c5b40c33660b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.678267 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71236ece-7761-4d82-a93c-c5b40c33660b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.679432 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.679511 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.679870 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71236ece-7761-4d82-a93c-c5b40c33660b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.680118 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71236ece-7761-4d82-a93c-c5b40c33660b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.683102 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71236ece-7761-4d82-a93c-c5b40c33660b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.683307 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71236ece-7761-4d82-a93c-c5b40c33660b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.689720 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.692750 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71236ece-7761-4d82-a93c-c5b40c33660b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.693488 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.693540 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/74c5fee18023779828160eb9f7d80ed70241abf770f5ddc3a17e57a288e11748/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.698827 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prhmc\" (UniqueName: \"kubernetes.io/projected/71236ece-7761-4d82-a93c-c5b40c33660b-kube-api-access-prhmc\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.764458 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be234930-8d42-4804-9d28-b9eb06fbaac8\") pod \"rabbitmq-cell1-server-0\" (UID: \"71236ece-7761-4d82-a93c-c5b40c33660b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.766432 4803 scope.go:117] "RemoveContainer" containerID="c21b90b93949fe0dc88c565a42c81d7fafe84c23ccf407e2c619db232c66744d"
Jan 27 22:15:12 crc kubenswrapper[4803]: I0127 22:15:12.815140 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 22:15:13 crc kubenswrapper[4803]: I0127 22:15:13.117486 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 22:15:13 crc kubenswrapper[4803]: I0127 22:15:13.332522 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-kpp6w"]
Jan 27 22:15:13 crc kubenswrapper[4803]: I0127 22:15:13.441505 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" event={"ID":"c23005bb-85d7-416b-8668-522a0d5785cb","Type":"ContainerStarted","Data":"f9514eaf305f5ff5c180fc8954bfce02502c59d7d0e93caf2f05cb079ecd5efb"}
Jan 27 22:15:13 crc kubenswrapper[4803]: I0127 22:15:13.441569 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" event={"ID":"c23005bb-85d7-416b-8668-522a0d5785cb","Type":"ContainerStarted","Data":"1f7f35b70bedfab757cb14bd590713f8be6b6b062fe635fdf8dc85364bdb4af1"}
Jan 27 22:15:13 crc kubenswrapper[4803]: I0127 22:15:13.465120 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" event={"ID":"c7f762c6-29a3-4eb1-b92a-db23c0692772","Type":"ContainerStarted","Data":"89413115fb89c2ef00db7731302013770dc603ca3452683d81612fe89ea57b24"}
Jan 27 22:15:13 crc kubenswrapper[4803]: I0127 22:15:13.468346 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerStarted","Data":"ec91d42bd8a135d0c614d6ed97e86acfb3222e35f87ebe79744ce38bff5ca16a"}
Jan 27 22:15:13 crc kubenswrapper[4803]: W0127 22:15:13.470713 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3998c673_ac46_4c45_a424_a92a7e88853c.slice/crio-0d35b2ed0c12bed01aafac8d900e79896041b60e1235309c00702d70d8cf8d7e WatchSource:0}: Error finding container 0d35b2ed0c12bed01aafac8d900e79896041b60e1235309c00702d70d8cf8d7e: Status 404 returned error can't find the container with id 0d35b2ed0c12bed01aafac8d900e79896041b60e1235309c00702d70d8cf8d7e
Jan 27 22:15:13 crc kubenswrapper[4803]: I0127 22:15:13.471067 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 27 22:15:13 crc kubenswrapper[4803]: I0127 22:15:13.480361 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" podStartSLOduration=13.480341401 podStartE2EDuration="13.480341401s" podCreationTimestamp="2026-01-27 22:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:15:13.458230586 +0000 UTC m=+1665.874252295" watchObservedRunningTime="2026-01-27 22:15:13.480341401 +0000 UTC m=+1665.896363100"
Jan 27 22:15:13 crc kubenswrapper[4803]: W0127 22:15:13.490955 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71236ece_7761_4d82_a93c_c5b40c33660b.slice/crio-e0ee66aea7b23460d84829543bd07c7e4be6c207c08c2608a6331697060304eb WatchSource:0}: Error finding container e0ee66aea7b23460d84829543bd07c7e4be6c207c08c2608a6331697060304eb: Status 404 returned error can't find the container with id e0ee66aea7b23460d84829543bd07c7e4be6c207c08c2608a6331697060304eb
Jan 27 22:15:13 crc kubenswrapper[4803]: I0127 22:15:13.503134 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 22:15:14 crc kubenswrapper[4803]: I0127 22:15:14.322919 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73021b6c-3762-44f7-af8d-efd3ff4e4b7b" path="/var/lib/kubelet/pods/73021b6c-3762-44f7-af8d-efd3ff4e4b7b/volumes"
Jan 27 22:15:14 crc kubenswrapper[4803]: I0127 22:15:14.480584 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71236ece-7761-4d82-a93c-c5b40c33660b","Type":"ContainerStarted","Data":"e0ee66aea7b23460d84829543bd07c7e4be6c207c08c2608a6331697060304eb"}
Jan 27 22:15:14 crc kubenswrapper[4803]: I0127 22:15:14.482026 4803 generic.go:334] "Generic (PLEG): container finished" podID="c7f762c6-29a3-4eb1-b92a-db23c0692772" containerID="a691cd7d8e417ef0da80b7e85fd607297a1f9901df980e0595cc6b63a24ccb03" exitCode=0
Jan 27 22:15:14 crc kubenswrapper[4803]: I0127 22:15:14.482084 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" event={"ID":"c7f762c6-29a3-4eb1-b92a-db23c0692772","Type":"ContainerDied","Data":"a691cd7d8e417ef0da80b7e85fd607297a1f9901df980e0595cc6b63a24ccb03"}
Jan 27 22:15:14 crc kubenswrapper[4803]: I0127 22:15:14.488058 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerStarted","Data":"b6e702a4a9cc100b2b9d048d40c8a324a207b7103d2498fa6d532ec86613d573"}
Jan 27 22:15:14 crc kubenswrapper[4803]: I0127 22:15:14.491372 4803 generic.go:334] "Generic (PLEG): container finished" podID="c23005bb-85d7-416b-8668-522a0d5785cb" containerID="f9514eaf305f5ff5c180fc8954bfce02502c59d7d0e93caf2f05cb079ecd5efb" exitCode=0
Jan 27 22:15:14 crc kubenswrapper[4803]: I0127 22:15:14.491453 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" event={"ID":"c23005bb-85d7-416b-8668-522a0d5785cb","Type":"ContainerDied","Data":"f9514eaf305f5ff5c180fc8954bfce02502c59d7d0e93caf2f05cb079ecd5efb"}
Jan 27 22:15:14 crc kubenswrapper[4803]: I0127 22:15:14.495509 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3998c673-ac46-4c45-a424-a92a7e88853c","Type":"ContainerStarted","Data":"0d35b2ed0c12bed01aafac8d900e79896041b60e1235309c00702d70d8cf8d7e"}
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.005692 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947"
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.076834 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c23005bb-85d7-416b-8668-522a0d5785cb-config-volume\") pod \"c23005bb-85d7-416b-8668-522a0d5785cb\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") "
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.077003 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c23005bb-85d7-416b-8668-522a0d5785cb-secret-volume\") pod \"c23005bb-85d7-416b-8668-522a0d5785cb\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") "
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.077246 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gq78\" (UniqueName: \"kubernetes.io/projected/c23005bb-85d7-416b-8668-522a0d5785cb-kube-api-access-7gq78\") pod \"c23005bb-85d7-416b-8668-522a0d5785cb\" (UID: \"c23005bb-85d7-416b-8668-522a0d5785cb\") "
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.078075 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23005bb-85d7-416b-8668-522a0d5785cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "c23005bb-85d7-416b-8668-522a0d5785cb" (UID: "c23005bb-85d7-416b-8668-522a0d5785cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.086590 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c23005bb-85d7-416b-8668-522a0d5785cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c23005bb-85d7-416b-8668-522a0d5785cb" (UID: "c23005bb-85d7-416b-8668-522a0d5785cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.093977 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c23005bb-85d7-416b-8668-522a0d5785cb-kube-api-access-7gq78" (OuterVolumeSpecName: "kube-api-access-7gq78") pod "c23005bb-85d7-416b-8668-522a0d5785cb" (UID: "c23005bb-85d7-416b-8668-522a0d5785cb"). InnerVolumeSpecName "kube-api-access-7gq78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.180610 4803 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c23005bb-85d7-416b-8668-522a0d5785cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.180654 4803 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c23005bb-85d7-416b-8668-522a0d5785cb-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.180667 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gq78\" (UniqueName: \"kubernetes.io/projected/c23005bb-85d7-416b-8668-522a0d5785cb-kube-api-access-7gq78\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.530714 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947" event={"ID":"c23005bb-85d7-416b-8668-522a0d5785cb","Type":"ContainerDied","Data":"1f7f35b70bedfab757cb14bd590713f8be6b6b062fe635fdf8dc85364bdb4af1"}
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.530762 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f7f35b70bedfab757cb14bd590713f8be6b6b062fe635fdf8dc85364bdb4af1"
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.530735 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947"
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.533095 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3998c673-ac46-4c45-a424-a92a7e88853c","Type":"ContainerStarted","Data":"f95e4e02e69b4c4f5bff078c1a8ca39c03c86ddd342dc314ae5650ab39ca8e4f"}
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.536323 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71236ece-7761-4d82-a93c-c5b40c33660b","Type":"ContainerStarted","Data":"e77e19461594a4f676efef7952610e2e21faa837b89fd84d198368a6344ce0de"}
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.539694 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" event={"ID":"c7f762c6-29a3-4eb1-b92a-db23c0692772","Type":"ContainerStarted","Data":"3c4f7642cc225775c9dce241ccf8d153e1811139e526fff76fad6f9225bce4b1"}
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.541060 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w"
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.544257 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerStarted","Data":"f6f817e6c8bdd38c60e602da2e5dd27bd3562ef47bd8954e48d1815a4be45144"}
Jan 27 22:15:16 crc kubenswrapper[4803]: I0127 22:15:16.564063 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" podStartSLOduration=10.564041293 podStartE2EDuration="10.564041293s" podCreationTimestamp="2026-01-27 22:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:15:16.56165442 +0000 UTC m=+1668.977676119" watchObservedRunningTime="2026-01-27 22:15:16.564041293 +0000 UTC m=+1668.980063002"
Jan 27 22:15:17 crc kubenswrapper[4803]: I0127 22:15:17.557787 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerStarted","Data":"2aae4bcf6852b4cdf1ff3ea2493b612c2475445d9f0c50593ef5735371daed0b"}
Jan 27 22:15:17 crc kubenswrapper[4803]: I0127 22:15:17.558304 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 22:15:17 crc kubenswrapper[4803]: I0127 22:15:17.584856 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.995727623 podStartE2EDuration="21.5848251s" podCreationTimestamp="2026-01-27 22:14:56 +0000 UTC" firstStartedPulling="2026-01-27 22:14:57.105982632 +0000 UTC m=+1649.522004331" lastFinishedPulling="2026-01-27 22:15:16.695080109 +0000 UTC m=+1669.111101808" observedRunningTime="2026-01-27 22:15:17.579177358 +0000 UTC m=+1669.995199077" watchObservedRunningTime="2026-01-27 22:15:17.5848251 +0000 UTC m=+1670.000846799"
Jan 27 22:15:21 crc kubenswrapper[4803]: I0127 22:15:21.787074 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w"
Jan 27 22:15:21 crc kubenswrapper[4803]: I0127 22:15:21.912802 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-cbgct"]
Jan 27 22:15:21 crc kubenswrapper[4803]: I0127 22:15:21.913075 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" podUID="29833af4-166d-4666-a071-f3f7e0d4ac91" containerName="dnsmasq-dns" containerID="cri-o://0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e" gracePeriod=10
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.109825 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-lzmj5"]
Jan 27 22:15:22 crc kubenswrapper[4803]: E0127 22:15:22.110769 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23005bb-85d7-416b-8668-522a0d5785cb" containerName="collect-profiles"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.110794 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23005bb-85d7-416b-8668-522a0d5785cb" containerName="collect-profiles"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.111172 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="c23005bb-85d7-416b-8668-522a0d5785cb" containerName="collect-profiles"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.112828 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.135512 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.135561 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.135596 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.136032 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-config\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.136109 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.136140 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.136185 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbbkr\" (UniqueName: \"kubernetes.io/projected/cd461b1e-89bc-4eb8-8884-bf6031e2784d-kube-api-access-wbbkr\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.136404 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-lzmj5"]
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.238212 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.238289 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.238359 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.238434 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-config\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.238526 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.238566 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.238631 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbbkr\" (UniqueName: \"kubernetes.io/projected/cd461b1e-89bc-4eb8-8884-bf6031e2784d-kube-api-access-wbbkr\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.239707 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.240463 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-config\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.241232 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.241499 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.241710 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.242363 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd461b1e-89bc-4eb8-8884-bf6031e2784d-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.280098 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbbkr\" (UniqueName: \"kubernetes.io/projected/cd461b1e-89bc-4eb8-8884-bf6031e2784d-kube-api-access-wbbkr\") pod \"dnsmasq-dns-5d75f767dc-lzmj5\" (UID: \"cd461b1e-89bc-4eb8-8884-bf6031e2784d\") " pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.435972 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.594742 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.634457 4803 generic.go:334] "Generic (PLEG): container finished" podID="29833af4-166d-4666-a071-f3f7e0d4ac91" containerID="0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e" exitCode=0
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.634496 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" event={"ID":"29833af4-166d-4666-a071-f3f7e0d4ac91","Type":"ContainerDied","Data":"0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e"}
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.634519 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct" event={"ID":"29833af4-166d-4666-a071-f3f7e0d4ac91","Type":"ContainerDied","Data":"ab3056df4df99f6bd64273a88c4e52fbb88bc5634fe25e5e6d9d833ed30fdaf9"}
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.634523 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-cbgct"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.634536 4803 scope.go:117] "RemoveContainer" containerID="0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.658471 4803 scope.go:117] "RemoveContainer" containerID="72a17f8b5b7d75ac535f934e01ea45626ab430a1019bdc78df05658d48cc9891"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.697415 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-sb\") pod \"29833af4-166d-4666-a071-f3f7e0d4ac91\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") "
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.697730 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-nb\") pod \"29833af4-166d-4666-a071-f3f7e0d4ac91\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") "
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.697868 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-swift-storage-0\") pod \"29833af4-166d-4666-a071-f3f7e0d4ac91\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") "
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.697907 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frflx\" (UniqueName: \"kubernetes.io/projected/29833af4-166d-4666-a071-f3f7e0d4ac91-kube-api-access-frflx\") pod \"29833af4-166d-4666-a071-f3f7e0d4ac91\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") "
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.697959 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-svc\") pod \"29833af4-166d-4666-a071-f3f7e0d4ac91\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") "
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.698033 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-config\") pod \"29833af4-166d-4666-a071-f3f7e0d4ac91\" (UID: \"29833af4-166d-4666-a071-f3f7e0d4ac91\") "
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.716106 4803 scope.go:117] "RemoveContainer" containerID="0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e"
Jan 27 22:15:22 crc kubenswrapper[4803]: E0127 22:15:22.717562 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e\": container with ID starting with 0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e not found: ID does not exist" containerID="0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.717617 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e"} err="failed to get container status \"0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e\": rpc error: code = NotFound desc = could not find container \"0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e\": container with ID starting with 0fd7f3b63e005b75020ad77ce1dccfe23d168d6b6d637c8943946a7b1ff1012e not found: ID does not exist"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.717650 4803 scope.go:117] "RemoveContainer" containerID="72a17f8b5b7d75ac535f934e01ea45626ab430a1019bdc78df05658d48cc9891"
Jan 27 22:15:22 crc kubenswrapper[4803]: E0127 22:15:22.717933 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72a17f8b5b7d75ac535f934e01ea45626ab430a1019bdc78df05658d48cc9891\": container with ID starting with 72a17f8b5b7d75ac535f934e01ea45626ab430a1019bdc78df05658d48cc9891 not found: ID does not exist" containerID="72a17f8b5b7d75ac535f934e01ea45626ab430a1019bdc78df05658d48cc9891"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.717964 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a17f8b5b7d75ac535f934e01ea45626ab430a1019bdc78df05658d48cc9891"} err="failed to get container status \"72a17f8b5b7d75ac535f934e01ea45626ab430a1019bdc78df05658d48cc9891\": rpc error: code = NotFound desc = could not find container \"72a17f8b5b7d75ac535f934e01ea45626ab430a1019bdc78df05658d48cc9891\": container with ID starting with 72a17f8b5b7d75ac535f934e01ea45626ab430a1019bdc78df05658d48cc9891 not found: ID does not exist"
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.719343 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29833af4-166d-4666-a071-f3f7e0d4ac91-kube-api-access-frflx" (OuterVolumeSpecName: "kube-api-access-frflx") pod "29833af4-166d-4666-a071-f3f7e0d4ac91" (UID: "29833af4-166d-4666-a071-f3f7e0d4ac91"). InnerVolumeSpecName "kube-api-access-frflx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.785181 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29833af4-166d-4666-a071-f3f7e0d4ac91" (UID: "29833af4-166d-4666-a071-f3f7e0d4ac91"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.800097 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "29833af4-166d-4666-a071-f3f7e0d4ac91" (UID: "29833af4-166d-4666-a071-f3f7e0d4ac91"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.801543 4803 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.801569 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frflx\" (UniqueName: \"kubernetes.io/projected/29833af4-166d-4666-a071-f3f7e0d4ac91-kube-api-access-frflx\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.801582 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.808328 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29833af4-166d-4666-a071-f3f7e0d4ac91" (UID: "29833af4-166d-4666-a071-f3f7e0d4ac91"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.810306 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29833af4-166d-4666-a071-f3f7e0d4ac91" (UID: "29833af4-166d-4666-a071-f3f7e0d4ac91"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.832176 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-config" (OuterVolumeSpecName: "config") pod "29833af4-166d-4666-a071-f3f7e0d4ac91" (UID: "29833af4-166d-4666-a071-f3f7e0d4ac91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.903761 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.903813 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-config\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.903826 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29833af4-166d-4666-a071-f3f7e0d4ac91-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.971322 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-cbgct"]
Jan 27 22:15:22 crc kubenswrapper[4803]: I0127 22:15:22.987060 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-cbgct"]
Jan 27 22:15:23 crc kubenswrapper[4803]: I0127 22:15:23.023299 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-lzmj5"]
Jan 27 22:15:23 crc kubenswrapper[4803]: I0127 22:15:23.651174 4803 generic.go:334] "Generic (PLEG): container finished" podID="cd461b1e-89bc-4eb8-8884-bf6031e2784d" containerID="296a908c0248294a03de5b7fe0e6b714960ab15f09515bd62ced93f313b41638" exitCode=0
Jan 27 22:15:23 crc kubenswrapper[4803]: I0127 22:15:23.651273 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5" event={"ID":"cd461b1e-89bc-4eb8-8884-bf6031e2784d","Type":"ContainerDied","Data":"296a908c0248294a03de5b7fe0e6b714960ab15f09515bd62ced93f313b41638"}
Jan 27 22:15:23 crc kubenswrapper[4803]: I0127 22:15:23.651658 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5" event={"ID":"cd461b1e-89bc-4eb8-8884-bf6031e2784d","Type":"ContainerStarted","Data":"0775beb66940879d395c2d79d878a33d73f87cd5487c3487f542351f9db370f5"}
Jan 27 22:15:24 crc kubenswrapper[4803]: I0127 22:15:24.306658 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c"
Jan 27 22:15:24 crc kubenswrapper[4803]: E0127 22:15:24.307047 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 22:15:24 crc kubenswrapper[4803]: I0127 22:15:24.321955 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29833af4-166d-4666-a071-f3f7e0d4ac91" path="/var/lib/kubelet/pods/29833af4-166d-4666-a071-f3f7e0d4ac91/volumes"
Jan 27 22:15:24 crc kubenswrapper[4803]: I0127 22:15:24.668161 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5" event={"ID":"cd461b1e-89bc-4eb8-8884-bf6031e2784d","Type":"ContainerStarted","Data":"06dd29852dfcd3f69c75b82055ba8eb7eaea661d58bbd0d94605865938d4b3f5"}
Jan 27 22:15:24 crc kubenswrapper[4803]: I0127 22:15:24.668307 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:24 crc kubenswrapper[4803]: I0127 22:15:24.671471 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sjbk6" event={"ID":"dfdeec7e-e323-4a7a-9a5c-badcec773861","Type":"ContainerStarted","Data":"4e8904f633efb98534f8bc13cebf0c884236f320cec56539f3081c3775e0f2e6"}
Jan 27 22:15:24 crc kubenswrapper[4803]: I0127 22:15:24.691438 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5" podStartSLOduration=2.691423335 podStartE2EDuration="2.691423335s" podCreationTimestamp="2026-01-27 22:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:15:24.687490609 +0000 UTC m=+1677.103512308" watchObservedRunningTime="2026-01-27 22:15:24.691423335 +0000 UTC m=+1677.107445034"
Jan 27 22:15:24 crc kubenswrapper[4803]: I0127 22:15:24.703045 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-sjbk6" podStartSLOduration=3.05487208 podStartE2EDuration="37.703029727s" podCreationTimestamp="2026-01-27 22:14:47 +0000 UTC" firstStartedPulling="2026-01-27 22:14:48.84772007 +0000 UTC m=+1641.263741769" lastFinishedPulling="2026-01-27 22:15:23.495877717 +0000 UTC m=+1675.911899416" observedRunningTime="2026-01-27 22:15:24.701007882 +0000 UTC m=+1677.117029581" watchObservedRunningTime="2026-01-27 22:15:24.703029727 +0000 UTC m=+1677.119051426"
Jan 27 22:15:26 crc kubenswrapper[4803]: I0127 22:15:26.524318 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 27 22:15:26 crc kubenswrapper[4803]: I0127 22:15:26.692538 4803 generic.go:334] "Generic (PLEG): container finished" podID="dfdeec7e-e323-4a7a-9a5c-badcec773861" containerID="4e8904f633efb98534f8bc13cebf0c884236f320cec56539f3081c3775e0f2e6" exitCode=0
Jan 27 22:15:26 crc kubenswrapper[4803]: I0127 22:15:26.692581 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sjbk6" event={"ID":"dfdeec7e-e323-4a7a-9a5c-badcec773861","Type":"ContainerDied","Data":"4e8904f633efb98534f8bc13cebf0c884236f320cec56539f3081c3775e0f2e6"}
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.275630 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-sjbk6"
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.425381 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4bc8\" (UniqueName: \"kubernetes.io/projected/dfdeec7e-e323-4a7a-9a5c-badcec773861-kube-api-access-j4bc8\") pod \"dfdeec7e-e323-4a7a-9a5c-badcec773861\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") "
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.425629 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-combined-ca-bundle\") pod \"dfdeec7e-e323-4a7a-9a5c-badcec773861\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") "
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.425803 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-config-data\") pod \"dfdeec7e-e323-4a7a-9a5c-badcec773861\" (UID: \"dfdeec7e-e323-4a7a-9a5c-badcec773861\") "
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.434634 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfdeec7e-e323-4a7a-9a5c-badcec773861-kube-api-access-j4bc8" (OuterVolumeSpecName: "kube-api-access-j4bc8") pod "dfdeec7e-e323-4a7a-9a5c-badcec773861" (UID: "dfdeec7e-e323-4a7a-9a5c-badcec773861"). InnerVolumeSpecName "kube-api-access-j4bc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.470606 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfdeec7e-e323-4a7a-9a5c-badcec773861" (UID: "dfdeec7e-e323-4a7a-9a5c-badcec773861"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.528447 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.528475 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4bc8\" (UniqueName: \"kubernetes.io/projected/dfdeec7e-e323-4a7a-9a5c-badcec773861-kube-api-access-j4bc8\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.531172 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-config-data" (OuterVolumeSpecName: "config-data") pod "dfdeec7e-e323-4a7a-9a5c-badcec773861" (UID: "dfdeec7e-e323-4a7a-9a5c-badcec773861"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.631020 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfdeec7e-e323-4a7a-9a5c-badcec773861-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.717955 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sjbk6" event={"ID":"dfdeec7e-e323-4a7a-9a5c-badcec773861","Type":"ContainerDied","Data":"5dccb3d3f759be14e4491e8cba0185f9084561408a867c67741d4fdf615fa415"}
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.718287 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dccb3d3f759be14e4491e8cba0185f9084561408a867c67741d4fdf615fa415"
Jan 27 22:15:28 crc kubenswrapper[4803]: I0127 22:15:28.718348 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-sjbk6"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.611752 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5f485b9957-lsqx4"]
Jan 27 22:15:29 crc kubenswrapper[4803]: E0127 22:15:29.612372 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29833af4-166d-4666-a071-f3f7e0d4ac91" containerName="init"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.612392 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="29833af4-166d-4666-a071-f3f7e0d4ac91" containerName="init"
Jan 27 22:15:29 crc kubenswrapper[4803]: E0127 22:15:29.612418 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29833af4-166d-4666-a071-f3f7e0d4ac91" containerName="dnsmasq-dns"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.612427 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="29833af4-166d-4666-a071-f3f7e0d4ac91" containerName="dnsmasq-dns"
Jan 27 22:15:29 crc kubenswrapper[4803]: E0127 22:15:29.612458 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfdeec7e-e323-4a7a-9a5c-badcec773861" containerName="heat-db-sync"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.612469 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfdeec7e-e323-4a7a-9a5c-badcec773861" containerName="heat-db-sync"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.612788 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="29833af4-166d-4666-a071-f3f7e0d4ac91" containerName="dnsmasq-dns"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.612809 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfdeec7e-e323-4a7a-9a5c-badcec773861" containerName="heat-db-sync"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.614106 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5f485b9957-lsqx4"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.629594 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f485b9957-lsqx4"]
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.640146 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6fc9ffcfc8-pvv2f"]
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.642283 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6fc9ffcfc8-pvv2f"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.662385 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6fc9ffcfc8-pvv2f"]
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.708672 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5b8fd6fc4f-nzv5v"]
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.710270 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.737632 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b8fd6fc4f-nzv5v"]
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.755646 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-public-tls-certs\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.755693 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ec2eb2-113b-451e-afe2-1e23b2cc656d-config-data-custom\") pod \"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.755741 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvxlb\" (UniqueName: \"kubernetes.io/projected/98ec2eb2-113b-451e-afe2-1e23b2cc656d-kube-api-access-gvxlb\") pod \"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.755801 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-internal-tls-certs\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.755822 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-combined-ca-bundle\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.755858 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-config-data-custom\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f"
Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.755927 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-config-data\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") "
pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.755945 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ec2eb2-113b-451e-afe2-1e23b2cc656d-combined-ca-bundle\") pod \"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.755979 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ec2eb2-113b-451e-afe2-1e23b2cc656d-config-data\") pod \"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.756034 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8nz\" (UniqueName: \"kubernetes.io/projected/32fd7e71-3e64-4163-895f-7e73ef8a39af-kube-api-access-8z8nz\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858002 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-config-data-custom\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858427 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-internal-tls-certs\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858457 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-combined-ca-bundle\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858474 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-config-data-custom\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858526 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-config-data\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858555 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-combined-ca-bundle\") pod 
\"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858581 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25rx\" (UniqueName: \"kubernetes.io/projected/6c535a08-5927-403e-9587-616393dd2091-kube-api-access-h25rx\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858608 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-config-data\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858628 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ec2eb2-113b-451e-afe2-1e23b2cc656d-combined-ca-bundle\") pod \"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858666 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ec2eb2-113b-451e-afe2-1e23b2cc656d-config-data\") pod \"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858705 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8nz\" (UniqueName: \"kubernetes.io/projected/32fd7e71-3e64-4163-895f-7e73ef8a39af-kube-api-access-8z8nz\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858725 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-internal-tls-certs\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858783 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-public-tls-certs\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858804 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ec2eb2-113b-451e-afe2-1e23b2cc656d-config-data-custom\") pod \"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858834 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-public-tls-certs\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.858878 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvxlb\" (UniqueName: \"kubernetes.io/projected/98ec2eb2-113b-451e-afe2-1e23b2cc656d-kube-api-access-gvxlb\") pod \"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.868983 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ec2eb2-113b-451e-afe2-1e23b2cc656d-config-data-custom\") pod \"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.870963 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-config-data-custom\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.871893 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ec2eb2-113b-451e-afe2-1e23b2cc656d-combined-ca-bundle\") pod \"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.872697 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-combined-ca-bundle\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.878019 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-public-tls-certs\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.880146 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ec2eb2-113b-451e-afe2-1e23b2cc656d-config-data\") pod \"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.880232 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-internal-tls-certs\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.880405 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvxlb\" (UniqueName: \"kubernetes.io/projected/98ec2eb2-113b-451e-afe2-1e23b2cc656d-kube-api-access-gvxlb\") pod 
\"heat-engine-5f485b9957-lsqx4\" (UID: \"98ec2eb2-113b-451e-afe2-1e23b2cc656d\") " pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.882951 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8nz\" (UniqueName: \"kubernetes.io/projected/32fd7e71-3e64-4163-895f-7e73ef8a39af-kube-api-access-8z8nz\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.884203 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fd7e71-3e64-4163-895f-7e73ef8a39af-config-data\") pod \"heat-api-6fc9ffcfc8-pvv2f\" (UID: \"32fd7e71-3e64-4163-895f-7e73ef8a39af\") " pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.944358 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.961141 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-public-tls-certs\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.961264 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-config-data-custom\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.961372 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-config-data\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.961408 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-combined-ca-bundle\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.961437 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25rx\" (UniqueName: \"kubernetes.io/projected/6c535a08-5927-403e-9587-616393dd2091-kube-api-access-h25rx\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.961532 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-internal-tls-certs\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.966691 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-config-data-custom\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.968001 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-public-tls-certs\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.968459 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-internal-tls-certs\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.968727 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-config-data\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.969186 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6fc9ffcfc8-pvv2f" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.980735 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25rx\" (UniqueName: \"kubernetes.io/projected/6c535a08-5927-403e-9587-616393dd2091-kube-api-access-h25rx\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:29 crc kubenswrapper[4803]: I0127 22:15:29.981759 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c535a08-5927-403e-9587-616393dd2091-combined-ca-bundle\") pod \"heat-cfnapi-5b8fd6fc4f-nzv5v\" (UID: \"6c535a08-5927-403e-9587-616393dd2091\") " pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" Jan 27 22:15:30 crc kubenswrapper[4803]: I0127 22:15:30.034598 4803 util.go:30] "No sandbox for pod can be found. 
Jan 27 22:15:30 crc kubenswrapper[4803]: W0127 22:15:30.541416 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98ec2eb2_113b_451e_afe2_1e23b2cc656d.slice/crio-70f3fce40b03c21bcc8ef38df424bc058c82f9de9211859d2000b88135d63fbc WatchSource:0}: Error finding container 70f3fce40b03c21bcc8ef38df424bc058c82f9de9211859d2000b88135d63fbc: Status 404 returned error can't find the container with id 70f3fce40b03c21bcc8ef38df424bc058c82f9de9211859d2000b88135d63fbc
Jan 27 22:15:30 crc kubenswrapper[4803]: I0127 22:15:30.545643 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f485b9957-lsqx4"]
Jan 27 22:15:30 crc kubenswrapper[4803]: I0127 22:15:30.570051 4803 scope.go:117] "RemoveContainer" containerID="0b2c830dc721a2edad3fd418354a9a2e73aa5da7b6de027ce46a3e2b2064fa6b"
Jan 27 22:15:30 crc kubenswrapper[4803]: I0127 22:15:30.661713 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6fc9ffcfc8-pvv2f"]
Jan 27 22:15:30 crc kubenswrapper[4803]: I0127 22:15:30.687898 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b8fd6fc4f-nzv5v"]
Jan 27 22:15:30 crc kubenswrapper[4803]: W0127 22:15:30.711587 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c535a08_5927_403e_9587_616393dd2091.slice/crio-b90e96e35dfe229bc8641da46c6845cbedab2e928d02ffb0a5981be25917b51a WatchSource:0}: Error finding container b90e96e35dfe229bc8641da46c6845cbedab2e928d02ffb0a5981be25917b51a: Status 404 returned error can't find the container with id b90e96e35dfe229bc8641da46c6845cbedab2e928d02ffb0a5981be25917b51a
Jan 27 22:15:30 crc kubenswrapper[4803]: I0127 22:15:30.749002 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f485b9957-lsqx4" event={"ID":"98ec2eb2-113b-451e-afe2-1e23b2cc656d","Type":"ContainerStarted","Data":"70f3fce40b03c21bcc8ef38df424bc058c82f9de9211859d2000b88135d63fbc"}
Jan 27 22:15:30 crc kubenswrapper[4803]: I0127 22:15:30.754248 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6fc9ffcfc8-pvv2f" event={"ID":"32fd7e71-3e64-4163-895f-7e73ef8a39af","Type":"ContainerStarted","Data":"83148817036f48c0fd715be966a3c612a5b92bb70e413e9671fabc53afa6da04"}
Jan 27 22:15:30 crc kubenswrapper[4803]: I0127 22:15:30.756295 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" event={"ID":"6c535a08-5927-403e-9587-616393dd2091","Type":"ContainerStarted","Data":"b90e96e35dfe229bc8641da46c6845cbedab2e928d02ffb0a5981be25917b51a"}
Jan 27 22:15:31 crc kubenswrapper[4803]: I0127 22:15:31.774614 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f485b9957-lsqx4" event={"ID":"98ec2eb2-113b-451e-afe2-1e23b2cc656d","Type":"ContainerStarted","Data":"a77d5c1051c35d24a12bf515e773cac0129ac29124b806e61dd0bc97dea47161"}
Jan 27 22:15:31 crc kubenswrapper[4803]: I0127 22:15:31.774961 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5f485b9957-lsqx4"
Jan 27 22:15:31 crc kubenswrapper[4803]: I0127 22:15:31.796412 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5f485b9957-lsqx4" podStartSLOduration=2.796393526 podStartE2EDuration="2.796393526s" podCreationTimestamp="2026-01-27 22:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:15:31.793227461 +0000 UTC m=+1684.209249160" watchObservedRunningTime="2026-01-27 22:15:31.796393526 +0000 UTC m=+1684.212415225"
Jan 27 22:15:32 crc kubenswrapper[4803]: I0127 22:15:32.438237 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-lzmj5"
Jan 27 22:15:32 crc kubenswrapper[4803]: I0127 22:15:32.582803 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-kpp6w"]
Jan 27 22:15:32 crc kubenswrapper[4803]: I0127 22:15:32.587248 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" podUID="c7f762c6-29a3-4eb1-b92a-db23c0692772" containerName="dnsmasq-dns" containerID="cri-o://3c4f7642cc225775c9dce241ccf8d153e1811139e526fff76fad6f9225bce4b1" gracePeriod=10
Jan 27 22:15:32 crc kubenswrapper[4803]: I0127 22:15:32.793949 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6fc9ffcfc8-pvv2f" event={"ID":"32fd7e71-3e64-4163-895f-7e73ef8a39af","Type":"ContainerStarted","Data":"144eab18dfbdef3f3206ed1f687bb31c4dc63d7a826054ce74bd1c3ef322037f"}
Jan 27 22:15:32 crc kubenswrapper[4803]: I0127 22:15:32.795633 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6fc9ffcfc8-pvv2f"
Jan 27 22:15:32 crc kubenswrapper[4803]: I0127 22:15:32.799406 4803 generic.go:334] "Generic (PLEG): container finished" podID="c7f762c6-29a3-4eb1-b92a-db23c0692772" containerID="3c4f7642cc225775c9dce241ccf8d153e1811139e526fff76fad6f9225bce4b1" exitCode=0
Jan 27 22:15:32 crc kubenswrapper[4803]: I0127 22:15:32.799637 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" event={"ID":"c7f762c6-29a3-4eb1-b92a-db23c0692772","Type":"ContainerDied","Data":"3c4f7642cc225775c9dce241ccf8d153e1811139e526fff76fad6f9225bce4b1"}
Jan 27 22:15:32 crc kubenswrapper[4803]: I0127 22:15:32.817659 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6fc9ffcfc8-pvv2f" podStartSLOduration=2.21717614 podStartE2EDuration="3.817640134s" podCreationTimestamp="2026-01-27 22:15:29 +0000 UTC" firstStartedPulling="2026-01-27 22:15:30.705540674 +0000 UTC m=+1683.121562393" lastFinishedPulling="2026-01-27 22:15:32.306004688 +0000 UTC m=+1684.722026387" observedRunningTime="2026-01-27 22:15:32.816684469 +0000 UTC m=+1685.232706168" watchObservedRunningTime="2026-01-27 22:15:32.817640134 +0000 UTC m=+1685.233661833"
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.159258 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w"
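
The "Killing container with a grace period" entry above carries gracePeriod=10 for this dnsmasq pod (the heat pods later in the log get 60). A stand-alone Go sketch of that contract for an arbitrary PID, not CRI-O's implementation: deliver SIGTERM, give the process the grace period to exit, then fall back to SIGKILL.

package main

import (
	"fmt"
	"os"
	"strconv"
	"syscall"
	"time"
)

func stopWithGrace(pid int, grace time.Duration) error {
	proc, _ := os.FindProcess(pid) // never fails on Unix
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 only probes for existence: an error means the process is gone.
		if proc.Signal(syscall.Signal(0)) != nil {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return proc.Signal(syscall.SIGKILL) // grace period expired
}

func main() {
	pid, err := strconv.Atoi(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "usage: stop <pid>")
		os.Exit(1)
	}
	if err := stopWithGrace(pid, 10*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
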
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.179655 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kpc5\" (UniqueName: \"kubernetes.io/projected/c7f762c6-29a3-4eb1-b92a-db23c0692772-kube-api-access-7kpc5\") pod \"c7f762c6-29a3-4eb1-b92a-db23c0692772\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") "
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.179693 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-config\") pod \"c7f762c6-29a3-4eb1-b92a-db23c0692772\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") "
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.179803 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-sb\") pod \"c7f762c6-29a3-4eb1-b92a-db23c0692772\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") "
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.179926 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-openstack-edpm-ipam\") pod \"c7f762c6-29a3-4eb1-b92a-db23c0692772\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") "
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.179949 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-nb\") pod \"c7f762c6-29a3-4eb1-b92a-db23c0692772\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") "
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.179983 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-svc\") pod \"c7f762c6-29a3-4eb1-b92a-db23c0692772\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") "
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.180005 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-swift-storage-0\") pod \"c7f762c6-29a3-4eb1-b92a-db23c0692772\" (UID: \"c7f762c6-29a3-4eb1-b92a-db23c0692772\") "
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.214174 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f762c6-29a3-4eb1-b92a-db23c0692772-kube-api-access-7kpc5" (OuterVolumeSpecName: "kube-api-access-7kpc5") pod "c7f762c6-29a3-4eb1-b92a-db23c0692772" (UID: "c7f762c6-29a3-4eb1-b92a-db23c0692772"). InnerVolumeSpecName "kube-api-access-7kpc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.284277 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kpc5\" (UniqueName: \"kubernetes.io/projected/c7f762c6-29a3-4eb1-b92a-db23c0692772-kube-api-access-7kpc5\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.290459 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c7f762c6-29a3-4eb1-b92a-db23c0692772" (UID: "c7f762c6-29a3-4eb1-b92a-db23c0692772"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.296157 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7f762c6-29a3-4eb1-b92a-db23c0692772" (UID: "c7f762c6-29a3-4eb1-b92a-db23c0692772"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.306350 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c7f762c6-29a3-4eb1-b92a-db23c0692772" (UID: "c7f762c6-29a3-4eb1-b92a-db23c0692772"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.307429 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7f762c6-29a3-4eb1-b92a-db23c0692772" (UID: "c7f762c6-29a3-4eb1-b92a-db23c0692772"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.309316 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7f762c6-29a3-4eb1-b92a-db23c0692772" (UID: "c7f762c6-29a3-4eb1-b92a-db23c0692772"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.331999 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-config" (OuterVolumeSpecName: "config") pod "c7f762c6-29a3-4eb1-b92a-db23c0692772" (UID: "c7f762c6-29a3-4eb1-b92a-db23c0692772"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.387495 4803 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.387539 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.387558 4803 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.387570 4803 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.387582 4803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-config\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.387593 4803 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7f762c6-29a3-4eb1-b92a-db23c0692772-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.812238 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w"
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.812238 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-kpp6w" event={"ID":"c7f762c6-29a3-4eb1-b92a-db23c0692772","Type":"ContainerDied","Data":"89413115fb89c2ef00db7731302013770dc603ca3452683d81612fe89ea57b24"}
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.812369 4803 scope.go:117] "RemoveContainer" containerID="3c4f7642cc225775c9dce241ccf8d153e1811139e526fff76fad6f9225bce4b1"
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.815482 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" event={"ID":"6c535a08-5927-403e-9587-616393dd2091","Type":"ContainerStarted","Data":"d9744d20abe3052d8d955575a7c868a427f4a37b9fd2cf3365e97ca483445502"}
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.815820 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v"
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.836513 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v" podStartSLOduration=3.241749629 podStartE2EDuration="4.836491688s" podCreationTimestamp="2026-01-27 22:15:29 +0000 UTC" firstStartedPulling="2026-01-27 22:15:30.729942101 +0000 UTC m=+1683.145963800" lastFinishedPulling="2026-01-27 22:15:32.32468416 +0000 UTC m=+1684.740705859" observedRunningTime="2026-01-27 22:15:33.832026778 +0000 UTC m=+1686.248048497" watchObservedRunningTime="2026-01-27 22:15:33.836491688 +0000 UTC m=+1686.252513387"
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.849133 4803 scope.go:117] "RemoveContainer" containerID="a691cd7d8e417ef0da80b7e85fd607297a1f9901df980e0595cc6b63a24ccb03"
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.861045 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-kpp6w"]
Jan 27 22:15:33 crc kubenswrapper[4803]: I0127 22:15:33.873004 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-kpp6w"]
Jan 27 22:15:34 crc kubenswrapper[4803]: I0127 22:15:34.324358 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f762c6-29a3-4eb1-b92a-db23c0692772" path="/var/lib/kubelet/pods/c7f762c6-29a3-4eb1-b92a-db23c0692772/volumes"
Jan 27 22:15:35 crc kubenswrapper[4803]: I0127 22:15:35.307186 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c"
Jan 27 22:15:35 crc kubenswrapper[4803]: E0127 22:15:35.307450 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 22:15:41 crc kubenswrapper[4803]: I0127 22:15:41.603444 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5b8fd6fc4f-nzv5v"
Jan 27 22:15:41 crc kubenswrapper[4803]: I0127 22:15:41.613998 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6fc9ffcfc8-pvv2f"
Jan 27 22:15:41 crc kubenswrapper[4803]: I0127 22:15:41.704929 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c55c9f8f8-s8fzg"]
Jan 27 22:15:41 crc kubenswrapper[4803]: I0127 22:15:41.705262 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" podUID="6211e4d6-a2aa-4243-9951-906324729104" containerName="heat-cfnapi" containerID="cri-o://572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59" gracePeriod=60
Jan 27 22:15:41 crc kubenswrapper[4803]: I0127 22:15:41.737671 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6cd7d794d7-nf5gr"]
Jan 27 22:15:41 crc kubenswrapper[4803]: I0127 22:15:41.737947 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6cd7d794d7-nf5gr" podUID="a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" containerName="heat-api" containerID="cri-o://f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e" gracePeriod=60
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.778080 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"]
Jan 27 22:15:42 crc kubenswrapper[4803]: E0127 22:15:42.778879 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f762c6-29a3-4eb1-b92a-db23c0692772" containerName="init"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.778895 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f762c6-29a3-4eb1-b92a-db23c0692772" containerName="init"
Jan 27 22:15:42 crc kubenswrapper[4803]: E0127 22:15:42.778913 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f762c6-29a3-4eb1-b92a-db23c0692772" containerName="dnsmasq-dns"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.778918 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f762c6-29a3-4eb1-b92a-db23c0692772" containerName="dnsmasq-dns"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.779195 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f762c6-29a3-4eb1-b92a-db23c0692772" containerName="dnsmasq-dns"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.780077 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.784333 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.784514 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.784605 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.784612 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.833912 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"]
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.844810 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdqdz\" (UniqueName: \"kubernetes.io/projected/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-kube-api-access-fdqdz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.844894 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.845110 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.845163 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.948067 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdqdz\" (UniqueName: \"kubernetes.io/projected/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-kube-api-access-fdqdz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.948140 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.948215 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.948235 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.959377 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.964788 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.968172 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdqdz\" (UniqueName: \"kubernetes.io/projected/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-kube-api-access-fdqdz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:42 crc kubenswrapper[4803]: I0127 22:15:42.976832 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:43 crc kubenswrapper[4803]: I0127 22:15:43.144287 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"
Jan 27 22:15:44 crc kubenswrapper[4803]: I0127 22:15:44.031200 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk"]
Jan 27 22:15:44 crc kubenswrapper[4803]: W0127 22:15:44.036367 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ff38750_3be9_4d41_a4c7_5c2f8abd0880.slice/crio-7a8576b3c7770db6bd26217e74fc8a016b4e3e1fbd3d10653122e3f8cb7bb806 WatchSource:0}: Error finding container 7a8576b3c7770db6bd26217e74fc8a016b4e3e1fbd3d10653122e3f8cb7bb806: Status 404 returned error can't find the container with id 7a8576b3c7770db6bd26217e74fc8a016b4e3e1fbd3d10653122e3f8cb7bb806
Jan 27 22:15:44 crc kubenswrapper[4803]: I0127 22:15:44.939130 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk" event={"ID":"8ff38750-3be9-4d41-a4c7-5c2f8abd0880","Type":"ContainerStarted","Data":"7a8576b3c7770db6bd26217e74fc8a016b4e3e1fbd3d10653122e3f8cb7bb806"}
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.604929 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6cd7d794d7-nf5gr"
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.719717 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data-custom\") pod \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.719803 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-combined-ca-bundle\") pod \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.719881 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2sjs\" (UniqueName: \"kubernetes.io/projected/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-kube-api-access-q2sjs\") pod \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.720738 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-internal-tls-certs\") pod \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.720810 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-public-tls-certs\") pod \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.720884 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data\") pod \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\" (UID: \"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.725835 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" (UID: "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.730098 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-kube-api-access-q2sjs" (OuterVolumeSpecName: "kube-api-access-q2sjs") pod "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" (UID: "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6"). InnerVolumeSpecName "kube-api-access-q2sjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.784762 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg"
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.804493 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data" (OuterVolumeSpecName: "config-data") pod "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" (UID: "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.812213 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" (UID: "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.826606 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-public-tls-certs\") pod \"6211e4d6-a2aa-4243-9951-906324729104\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.826683 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data\") pod \"6211e4d6-a2aa-4243-9951-906324729104\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.826728 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjsn6\" (UniqueName: \"kubernetes.io/projected/6211e4d6-a2aa-4243-9951-906324729104-kube-api-access-jjsn6\") pod \"6211e4d6-a2aa-4243-9951-906324729104\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.826854 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-internal-tls-certs\") pod \"6211e4d6-a2aa-4243-9951-906324729104\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.826973 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-combined-ca-bundle\") pod \"6211e4d6-a2aa-4243-9951-906324729104\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.827106 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data-custom\") pod \"6211e4d6-a2aa-4243-9951-906324729104\" (UID: \"6211e4d6-a2aa-4243-9951-906324729104\") "
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.827624 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.827637 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2sjs\" (UniqueName: \"kubernetes.io/projected/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-kube-api-access-q2sjs\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.827649 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.827657 4803 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.836361 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6211e4d6-a2aa-4243-9951-906324729104" (UID: "6211e4d6-a2aa-4243-9951-906324729104"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.836432 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6211e4d6-a2aa-4243-9951-906324729104-kube-api-access-jjsn6" (OuterVolumeSpecName: "kube-api-access-jjsn6") pod "6211e4d6-a2aa-4243-9951-906324729104" (UID: "6211e4d6-a2aa-4243-9951-906324729104"). InnerVolumeSpecName "kube-api-access-jjsn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.847594 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" (UID: "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.876520 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6211e4d6-a2aa-4243-9951-906324729104" (UID: "6211e4d6-a2aa-4243-9951-906324729104"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.877441 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" (UID: "a7c7b837-798b-4f6a-b9bd-1d93b279e8d6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.906175 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6211e4d6-a2aa-4243-9951-906324729104" (UID: "6211e4d6-a2aa-4243-9951-906324729104"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.912451 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6211e4d6-a2aa-4243-9951-906324729104" (UID: "6211e4d6-a2aa-4243-9951-906324729104"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.917151 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data" (OuterVolumeSpecName: "config-data") pod "6211e4d6-a2aa-4243-9951-906324729104" (UID: "6211e4d6-a2aa-4243-9951-906324729104"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.929377 4803 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.929472 4803 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.929486 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.929497 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjsn6\" (UniqueName: \"kubernetes.io/projected/6211e4d6-a2aa-4243-9951-906324729104-kube-api-access-jjsn6\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.929507 4803 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.929516 4803 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.929525 4803 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.929533 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6211e4d6-a2aa-4243-9951-906324729104-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.954207 4803 generic.go:334] "Generic (PLEG): container finished" podID="6211e4d6-a2aa-4243-9951-906324729104" containerID="572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59" exitCode=0
Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.954266 4803 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.954276 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" event={"ID":"6211e4d6-a2aa-4243-9951-906324729104","Type":"ContainerDied","Data":"572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59"} Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.954306 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c55c9f8f8-s8fzg" event={"ID":"6211e4d6-a2aa-4243-9951-906324729104","Type":"ContainerDied","Data":"e6e838de4990a15f74a99f0fb31cc0200f03813a8ea73bd185f7213d365eed98"} Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.954321 4803 scope.go:117] "RemoveContainer" containerID="572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59" Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.956925 4803 generic.go:334] "Generic (PLEG): container finished" podID="a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" containerID="f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e" exitCode=0 Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.956965 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cd7d794d7-nf5gr" event={"ID":"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6","Type":"ContainerDied","Data":"f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e"} Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.956989 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cd7d794d7-nf5gr" event={"ID":"a7c7b837-798b-4f6a-b9bd-1d93b279e8d6","Type":"ContainerDied","Data":"e30b2d819b4209053226320edae6fb50f39c26950174d441bc2c828571cd633d"} Jan 27 22:15:45 crc kubenswrapper[4803]: I0127 22:15:45.957031 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6cd7d794d7-nf5gr" Jan 27 22:15:46 crc kubenswrapper[4803]: I0127 22:15:46.003216 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6cd7d794d7-nf5gr"] Jan 27 22:15:46 crc kubenswrapper[4803]: I0127 22:15:46.016270 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6cd7d794d7-nf5gr"] Jan 27 22:15:46 crc kubenswrapper[4803]: I0127 22:15:46.027703 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c55c9f8f8-s8fzg"] Jan 27 22:15:46 crc kubenswrapper[4803]: I0127 22:15:46.041467 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6c55c9f8f8-s8fzg"] Jan 27 22:15:46 crc kubenswrapper[4803]: I0127 22:15:46.056917 4803 scope.go:117] "RemoveContainer" containerID="572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59" Jan 27 22:15:46 crc kubenswrapper[4803]: E0127 22:15:46.057449 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59\": container with ID starting with 572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59 not found: ID does not exist" containerID="572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59" Jan 27 22:15:46 crc kubenswrapper[4803]: I0127 22:15:46.057477 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59"} err="failed to get container status \"572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59\": rpc error: code = NotFound desc = could not find container \"572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59\": container with ID starting with 572512d9300948274d1275afac99daea8ad78f610168208355f9a8eaed174b59 not found: ID does not exist" Jan 27 22:15:46 crc kubenswrapper[4803]: I0127 22:15:46.057497 4803 scope.go:117] "RemoveContainer" containerID="f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e" Jan 27 22:15:46 crc kubenswrapper[4803]: I0127 22:15:46.328031 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6211e4d6-a2aa-4243-9951-906324729104" path="/var/lib/kubelet/pods/6211e4d6-a2aa-4243-9951-906324729104/volumes" Jan 27 22:15:46 crc kubenswrapper[4803]: I0127 22:15:46.328575 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" path="/var/lib/kubelet/pods/a7c7b837-798b-4f6a-b9bd-1d93b279e8d6/volumes" Jan 27 22:15:47 crc kubenswrapper[4803]: I0127 22:15:47.212166 4803 scope.go:117] "RemoveContainer" containerID="f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e" Jan 27 22:15:47 crc kubenswrapper[4803]: E0127 22:15:47.212870 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e\": container with ID starting with f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e not found: ID does not exist" containerID="f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e" Jan 27 22:15:47 crc kubenswrapper[4803]: I0127 22:15:47.212910 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e"} err="failed to get container status 
\"f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e\": rpc error: code = NotFound desc = could not find container \"f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e\": container with ID starting with f73e40c2000d9bbf1b885737578732b42a8186e81bcc6bffd028ab74ea008a1e not found: ID does not exist" Jan 27 22:15:47 crc kubenswrapper[4803]: I0127 22:15:47.984583 4803 generic.go:334] "Generic (PLEG): container finished" podID="3998c673-ac46-4c45-a424-a92a7e88853c" containerID="f95e4e02e69b4c4f5bff078c1a8ca39c03c86ddd342dc314ae5650ab39ca8e4f" exitCode=0 Jan 27 22:15:47 crc kubenswrapper[4803]: I0127 22:15:47.984685 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3998c673-ac46-4c45-a424-a92a7e88853c","Type":"ContainerDied","Data":"f95e4e02e69b4c4f5bff078c1a8ca39c03c86ddd342dc314ae5650ab39ca8e4f"} Jan 27 22:15:47 crc kubenswrapper[4803]: I0127 22:15:47.987936 4803 generic.go:334] "Generic (PLEG): container finished" podID="71236ece-7761-4d82-a93c-c5b40c33660b" containerID="e77e19461594a4f676efef7952610e2e21faa837b89fd84d198368a6344ce0de" exitCode=0 Jan 27 22:15:47 crc kubenswrapper[4803]: I0127 22:15:47.988009 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71236ece-7761-4d82-a93c-c5b40c33660b","Type":"ContainerDied","Data":"e77e19461594a4f676efef7952610e2e21faa837b89fd84d198368a6344ce0de"} Jan 27 22:15:49 crc kubenswrapper[4803]: I0127 22:15:49.306894 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:15:49 crc kubenswrapper[4803]: E0127 22:15:49.307476 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:15:50 crc kubenswrapper[4803]: I0127 22:15:49.999982 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5f485b9957-lsqx4" Jan 27 22:15:50 crc kubenswrapper[4803]: I0127 22:15:50.094619 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7cfbfb9f4d-z24kh"] Jan 27 22:15:50 crc kubenswrapper[4803]: I0127 22:15:50.095065 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" podUID="552f794c-b47b-4f78-9f79-d989e7b621d7" containerName="heat-engine" containerID="cri-o://09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab" gracePeriod=60 Jan 27 22:15:52 crc kubenswrapper[4803]: E0127 22:15:52.216436 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:15:52 crc kubenswrapper[4803]: E0127 22:15:52.218682 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab" 
cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:15:52 crc kubenswrapper[4803]: E0127 22:15:52.220145 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:15:52 crc kubenswrapper[4803]: E0127 22:15:52.220185 4803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" podUID="552f794c-b47b-4f78-9f79-d989e7b621d7" containerName="heat-engine" Jan 27 22:15:55 crc kubenswrapper[4803]: I0127 22:15:55.088153 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71236ece-7761-4d82-a93c-c5b40c33660b","Type":"ContainerStarted","Data":"ada071f99fb74ade04662ee9bfcbaf8c5c66843e23541a230f23aa7dac875a55"} Jan 27 22:15:55 crc kubenswrapper[4803]: I0127 22:15:55.089271 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:15:55 crc kubenswrapper[4803]: I0127 22:15:55.095731 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk" event={"ID":"8ff38750-3be9-4d41-a4c7-5c2f8abd0880","Type":"ContainerStarted","Data":"6be5fd540a0f6f2d4057771f58ee37416308ac8685ecf9e7b65ece7183a11103"} Jan 27 22:15:55 crc kubenswrapper[4803]: I0127 22:15:55.103117 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3998c673-ac46-4c45-a424-a92a7e88853c","Type":"ContainerStarted","Data":"9a866d3d9b39ee90448c9a2d746f1e4aeee68610a2f50d4c03406a548533a792"} Jan 27 22:15:55 crc kubenswrapper[4803]: I0127 22:15:55.104226 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 27 22:15:55 crc kubenswrapper[4803]: I0127 22:15:55.129043 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.12901572 podStartE2EDuration="43.12901572s" podCreationTimestamp="2026-01-27 22:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:15:55.115233399 +0000 UTC m=+1707.531255108" watchObservedRunningTime="2026-01-27 22:15:55.12901572 +0000 UTC m=+1707.545037419" Jan 27 22:15:55 crc kubenswrapper[4803]: I0127 22:15:55.153720 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=45.153692354 podStartE2EDuration="45.153692354s" podCreationTimestamp="2026-01-27 22:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:15:55.144570988 +0000 UTC m=+1707.560592697" watchObservedRunningTime="2026-01-27 22:15:55.153692354 +0000 UTC m=+1707.569714053" Jan 27 22:15:55 crc kubenswrapper[4803]: I0127 22:15:55.187490 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk" podStartSLOduration=2.640180471 podStartE2EDuration="13.187475383s" podCreationTimestamp="2026-01-27 22:15:42 +0000 
UTC" firstStartedPulling="2026-01-27 22:15:44.038305706 +0000 UTC m=+1696.454327405" lastFinishedPulling="2026-01-27 22:15:54.585600618 +0000 UTC m=+1707.001622317" observedRunningTime="2026-01-27 22:15:55.163552469 +0000 UTC m=+1707.579574168" watchObservedRunningTime="2026-01-27 22:15:55.187475383 +0000 UTC m=+1707.603497082" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.161523 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-q7tp4"] Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.177137 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-q7tp4"] Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.247035 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-hrwh7"] Jan 27 22:15:58 crc kubenswrapper[4803]: E0127 22:15:58.247769 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6211e4d6-a2aa-4243-9951-906324729104" containerName="heat-cfnapi" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.247786 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6211e4d6-a2aa-4243-9951-906324729104" containerName="heat-cfnapi" Jan 27 22:15:58 crc kubenswrapper[4803]: E0127 22:15:58.247802 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" containerName="heat-api" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.247808 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" containerName="heat-api" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.248079 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="6211e4d6-a2aa-4243-9951-906324729104" containerName="heat-cfnapi" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.248103 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c7b837-798b-4f6a-b9bd-1d93b279e8d6" containerName="heat-api" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.248941 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.251003 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.299551 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-hrwh7"] Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.357500 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-scripts\") pod \"aodh-db-sync-hrwh7\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.357632 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-combined-ca-bundle\") pod \"aodh-db-sync-hrwh7\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.357720 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdc52\" (UniqueName: \"kubernetes.io/projected/6886b51d-5eac-48bf-9a10-98a0b8a8d051-kube-api-access-tdc52\") pod \"aodh-db-sync-hrwh7\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.357814 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-config-data\") pod \"aodh-db-sync-hrwh7\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.360909 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489ecf39-a12d-47b3-8f74-eb20ea68f519" path="/var/lib/kubelet/pods/489ecf39-a12d-47b3-8f74-eb20ea68f519/volumes" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.459923 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdc52\" (UniqueName: \"kubernetes.io/projected/6886b51d-5eac-48bf-9a10-98a0b8a8d051-kube-api-access-tdc52\") pod \"aodh-db-sync-hrwh7\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.460060 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-config-data\") pod \"aodh-db-sync-hrwh7\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.460121 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-scripts\") pod \"aodh-db-sync-hrwh7\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.460224 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-combined-ca-bundle\") pod \"aodh-db-sync-hrwh7\" (UID: 
\"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.465747 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-scripts\") pod \"aodh-db-sync-hrwh7\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.465898 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-combined-ca-bundle\") pod \"aodh-db-sync-hrwh7\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.466568 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-config-data\") pod \"aodh-db-sync-hrwh7\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.488075 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdc52\" (UniqueName: \"kubernetes.io/projected/6886b51d-5eac-48bf-9a10-98a0b8a8d051-kube-api-access-tdc52\") pod \"aodh-db-sync-hrwh7\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:58 crc kubenswrapper[4803]: I0127 22:15:58.616672 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:15:59 crc kubenswrapper[4803]: I0127 22:15:59.107565 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-hrwh7"] Jan 27 22:15:59 crc kubenswrapper[4803]: I0127 22:15:59.143778 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hrwh7" event={"ID":"6886b51d-5eac-48bf-9a10-98a0b8a8d051","Type":"ContainerStarted","Data":"3aac9272acd21e3951c2cbbe65210257ba34fafc07ed725fe7acb94201330604"} Jan 27 22:16:01 crc kubenswrapper[4803]: I0127 22:16:01.308585 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:16:01 crc kubenswrapper[4803]: E0127 22:16:01.309443 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:16:02 crc kubenswrapper[4803]: E0127 22:16:02.217600 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:16:02 crc kubenswrapper[4803]: E0127 22:16:02.219628 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab" 
cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:16:02 crc kubenswrapper[4803]: E0127 22:16:02.221629 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:16:02 crc kubenswrapper[4803]: E0127 22:16:02.221675 4803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" podUID="552f794c-b47b-4f78-9f79-d989e7b621d7" containerName="heat-engine" Jan 27 22:16:04 crc kubenswrapper[4803]: I0127 22:16:04.209330 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hrwh7" event={"ID":"6886b51d-5eac-48bf-9a10-98a0b8a8d051","Type":"ContainerStarted","Data":"e351a744d7fe6d1ae3aec5a7563af071f38139a7507eff32e5a87bd498e58ef4"} Jan 27 22:16:07 crc kubenswrapper[4803]: I0127 22:16:07.245694 4803 generic.go:334] "Generic (PLEG): container finished" podID="6886b51d-5eac-48bf-9a10-98a0b8a8d051" containerID="e351a744d7fe6d1ae3aec5a7563af071f38139a7507eff32e5a87bd498e58ef4" exitCode=0 Jan 27 22:16:07 crc kubenswrapper[4803]: I0127 22:16:07.245819 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hrwh7" event={"ID":"6886b51d-5eac-48bf-9a10-98a0b8a8d051","Type":"ContainerDied","Data":"e351a744d7fe6d1ae3aec5a7563af071f38139a7507eff32e5a87bd498e58ef4"} Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.712148 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.823436 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-combined-ca-bundle\") pod \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.823596 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-scripts\") pod \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.823712 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-config-data\") pod \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.823754 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdc52\" (UniqueName: \"kubernetes.io/projected/6886b51d-5eac-48bf-9a10-98a0b8a8d051-kube-api-access-tdc52\") pod \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\" (UID: \"6886b51d-5eac-48bf-9a10-98a0b8a8d051\") " Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.829152 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6886b51d-5eac-48bf-9a10-98a0b8a8d051-kube-api-access-tdc52" (OuterVolumeSpecName: "kube-api-access-tdc52") pod "6886b51d-5eac-48bf-9a10-98a0b8a8d051" (UID: "6886b51d-5eac-48bf-9a10-98a0b8a8d051"). InnerVolumeSpecName "kube-api-access-tdc52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.838476 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-scripts" (OuterVolumeSpecName: "scripts") pod "6886b51d-5eac-48bf-9a10-98a0b8a8d051" (UID: "6886b51d-5eac-48bf-9a10-98a0b8a8d051"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.855772 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-config-data" (OuterVolumeSpecName: "config-data") pod "6886b51d-5eac-48bf-9a10-98a0b8a8d051" (UID: "6886b51d-5eac-48bf-9a10-98a0b8a8d051"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.856446 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6886b51d-5eac-48bf-9a10-98a0b8a8d051" (UID: "6886b51d-5eac-48bf-9a10-98a0b8a8d051"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.927341 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdc52\" (UniqueName: \"kubernetes.io/projected/6886b51d-5eac-48bf-9a10-98a0b8a8d051-kube-api-access-tdc52\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.927388 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.927400 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:08 crc kubenswrapper[4803]: I0127 22:16:08.927409 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6886b51d-5eac-48bf-9a10-98a0b8a8d051-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:09 crc kubenswrapper[4803]: I0127 22:16:09.278931 4803 generic.go:334] "Generic (PLEG): container finished" podID="8ff38750-3be9-4d41-a4c7-5c2f8abd0880" containerID="6be5fd540a0f6f2d4057771f58ee37416308ac8685ecf9e7b65ece7183a11103" exitCode=0 Jan 27 22:16:09 crc kubenswrapper[4803]: I0127 22:16:09.278979 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk" event={"ID":"8ff38750-3be9-4d41-a4c7-5c2f8abd0880","Type":"ContainerDied","Data":"6be5fd540a0f6f2d4057771f58ee37416308ac8685ecf9e7b65ece7183a11103"} Jan 27 22:16:09 crc kubenswrapper[4803]: I0127 22:16:09.280730 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hrwh7" event={"ID":"6886b51d-5eac-48bf-9a10-98a0b8a8d051","Type":"ContainerDied","Data":"3aac9272acd21e3951c2cbbe65210257ba34fafc07ed725fe7acb94201330604"} Jan 27 22:16:09 crc kubenswrapper[4803]: I0127 22:16:09.280761 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aac9272acd21e3951c2cbbe65210257ba34fafc07ed725fe7acb94201330604" Jan 27 22:16:09 crc kubenswrapper[4803]: I0127 22:16:09.280930 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-hrwh7" Jan 27 22:16:10 crc kubenswrapper[4803]: I0127 22:16:10.782771 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 27 22:16:10 crc kubenswrapper[4803]: I0127 22:16:10.861832 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 22:16:10 crc kubenswrapper[4803]: I0127 22:16:10.902138 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.046085 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.046357 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-api" containerID="cri-o://48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58" gracePeriod=30 Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.046822 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-listener" containerID="cri-o://16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de" gracePeriod=30 Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.047825 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-evaluator" containerID="cri-o://9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2" gracePeriod=30 Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.047898 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-notifier" containerID="cri-o://3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487" gracePeriod=30 Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.081977 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdqdz\" (UniqueName: \"kubernetes.io/projected/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-kube-api-access-fdqdz\") pod \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.082124 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-ssh-key-openstack-edpm-ipam\") pod \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.082164 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-inventory\") pod \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.082251 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-repo-setup-combined-ca-bundle\") pod \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.104679 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-kube-api-access-fdqdz" (OuterVolumeSpecName: "kube-api-access-fdqdz") pod "8ff38750-3be9-4d41-a4c7-5c2f8abd0880" (UID: "8ff38750-3be9-4d41-a4c7-5c2f8abd0880"). InnerVolumeSpecName "kube-api-access-fdqdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.118893 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8ff38750-3be9-4d41-a4c7-5c2f8abd0880" (UID: "8ff38750-3be9-4d41-a4c7-5c2f8abd0880"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:11 crc kubenswrapper[4803]: E0127 22:16:11.148314 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-inventory podName:8ff38750-3be9-4d41-a4c7-5c2f8abd0880 nodeName:}" failed. No retries permitted until 2026-01-27 22:16:11.648289456 +0000 UTC m=+1724.064311155 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-inventory") pod "8ff38750-3be9-4d41-a4c7-5c2f8abd0880" (UID: "8ff38750-3be9-4d41-a4c7-5c2f8abd0880") : error deleting /var/lib/kubelet/pods/8ff38750-3be9-4d41-a4c7-5c2f8abd0880/volume-subpaths: remove /var/lib/kubelet/pods/8ff38750-3be9-4d41-a4c7-5c2f8abd0880/volume-subpaths: no such file or directory Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.152944 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8ff38750-3be9-4d41-a4c7-5c2f8abd0880" (UID: "8ff38750-3be9-4d41-a4c7-5c2f8abd0880"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.184804 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdqdz\" (UniqueName: \"kubernetes.io/projected/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-kube-api-access-fdqdz\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.184833 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.184866 4803 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.320366 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk" event={"ID":"8ff38750-3be9-4d41-a4c7-5c2f8abd0880","Type":"ContainerDied","Data":"7a8576b3c7770db6bd26217e74fc8a016b4e3e1fbd3d10653122e3f8cb7bb806"} Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.320747 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8576b3c7770db6bd26217e74fc8a016b4e3e1fbd3d10653122e3f8cb7bb806" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.320534 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.386078 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9"] Jan 27 22:16:11 crc kubenswrapper[4803]: E0127 22:16:11.386679 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff38750-3be9-4d41-a4c7-5c2f8abd0880" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.386705 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff38750-3be9-4d41-a4c7-5c2f8abd0880" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 22:16:11 crc kubenswrapper[4803]: E0127 22:16:11.386752 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6886b51d-5eac-48bf-9a10-98a0b8a8d051" containerName="aodh-db-sync" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.386762 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6886b51d-5eac-48bf-9a10-98a0b8a8d051" containerName="aodh-db-sync" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.387072 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff38750-3be9-4d41-a4c7-5c2f8abd0880" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.387108 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="6886b51d-5eac-48bf-9a10-98a0b8a8d051" containerName="aodh-db-sync" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.388279 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.414863 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9"] Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.491011 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbcb\" (UniqueName: \"kubernetes.io/projected/989f334d-f101-4247-9465-d4bf4c4732b8-kube-api-access-pjbcb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-j26c9\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.491077 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-j26c9\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.491491 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-j26c9\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.594134 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-j26c9\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.594231 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbcb\" (UniqueName: \"kubernetes.io/projected/989f334d-f101-4247-9465-d4bf4c4732b8-kube-api-access-pjbcb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-j26c9\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.594273 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-j26c9\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.602682 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-j26c9\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.602732 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-j26c9\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.611581 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbcb\" (UniqueName: \"kubernetes.io/projected/989f334d-f101-4247-9465-d4bf4c4732b8-kube-api-access-pjbcb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-j26c9\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.696865 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-inventory\") pod \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\" (UID: \"8ff38750-3be9-4d41-a4c7-5c2f8abd0880\") " Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.700382 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-inventory" (OuterVolumeSpecName: "inventory") pod "8ff38750-3be9-4d41-a4c7-5c2f8abd0880" (UID: "8ff38750-3be9-4d41-a4c7-5c2f8abd0880"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.741920 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:11 crc kubenswrapper[4803]: I0127 22:16:11.799793 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ff38750-3be9-4d41-a4c7-5c2f8abd0880-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:12 crc kubenswrapper[4803]: E0127 22:16:12.231111 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:16:12 crc kubenswrapper[4803]: E0127 22:16:12.233724 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:16:12 crc kubenswrapper[4803]: E0127 22:16:12.235093 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 22:16:12 crc kubenswrapper[4803]: E0127 22:16:12.235165 4803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" podUID="552f794c-b47b-4f78-9f79-d989e7b621d7" containerName="heat-engine" Jan 27 22:16:12 crc kubenswrapper[4803]: I0127 22:16:12.380157 4803 generic.go:334] "Generic (PLEG): container finished" podID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerID="9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2" exitCode=0 Jan 27 22:16:12 crc kubenswrapper[4803]: I0127 22:16:12.380188 4803 generic.go:334] "Generic (PLEG): container finished" podID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerID="48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58" exitCode=0 Jan 27 22:16:12 crc kubenswrapper[4803]: I0127 22:16:12.380206 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1914e01-7a22-4771-b16b-d54d6c902b67","Type":"ContainerDied","Data":"9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2"} Jan 27 22:16:12 crc kubenswrapper[4803]: I0127 22:16:12.380230 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1914e01-7a22-4771-b16b-d54d6c902b67","Type":"ContainerDied","Data":"48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58"} Jan 27 22:16:12 crc kubenswrapper[4803]: W0127 22:16:12.644417 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod989f334d_f101_4247_9465_d4bf4c4732b8.slice/crio-e951b563920fa41e10370b2ab3b6570c9c7e72de5c669fefe026abdbe4fe98bb WatchSource:0}: Error finding container e951b563920fa41e10370b2ab3b6570c9c7e72de5c669fefe026abdbe4fe98bb: Status 404 returned error can't find the container with id e951b563920fa41e10370b2ab3b6570c9c7e72de5c669fefe026abdbe4fe98bb Jan 27 22:16:12 crc 
kubenswrapper[4803]: I0127 22:16:12.653463 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9"] Jan 27 22:16:12 crc kubenswrapper[4803]: I0127 22:16:12.820059 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.306758 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:16:13 crc kubenswrapper[4803]: E0127 22:16:13.307277 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.390870 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" event={"ID":"989f334d-f101-4247-9465-d4bf4c4732b8","Type":"ContainerStarted","Data":"e951b563920fa41e10370b2ab3b6570c9c7e72de5c669fefe026abdbe4fe98bb"} Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.393781 4803 generic.go:334] "Generic (PLEG): container finished" podID="552f794c-b47b-4f78-9f79-d989e7b621d7" containerID="09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab" exitCode=0 Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.393811 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" event={"ID":"552f794c-b47b-4f78-9f79-d989e7b621d7","Type":"ContainerDied","Data":"09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab"} Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.393834 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" event={"ID":"552f794c-b47b-4f78-9f79-d989e7b621d7","Type":"ContainerDied","Data":"6e14a98b725f9ebdc7dd8a725c70b133134f40b82341eda4a7acf00aa786780e"} Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.393860 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e14a98b725f9ebdc7dd8a725c70b133134f40b82341eda4a7acf00aa786780e" Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.424565 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.557704 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data\") pod \"552f794c-b47b-4f78-9f79-d989e7b621d7\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.557785 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data-custom\") pod \"552f794c-b47b-4f78-9f79-d989e7b621d7\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.558128 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-combined-ca-bundle\") pod \"552f794c-b47b-4f78-9f79-d989e7b621d7\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.558190 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmg64\" (UniqueName: \"kubernetes.io/projected/552f794c-b47b-4f78-9f79-d989e7b621d7-kube-api-access-wmg64\") pod \"552f794c-b47b-4f78-9f79-d989e7b621d7\" (UID: \"552f794c-b47b-4f78-9f79-d989e7b621d7\") " Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.563869 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552f794c-b47b-4f78-9f79-d989e7b621d7-kube-api-access-wmg64" (OuterVolumeSpecName: "kube-api-access-wmg64") pod "552f794c-b47b-4f78-9f79-d989e7b621d7" (UID: "552f794c-b47b-4f78-9f79-d989e7b621d7"). InnerVolumeSpecName "kube-api-access-wmg64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.564978 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "552f794c-b47b-4f78-9f79-d989e7b621d7" (UID: "552f794c-b47b-4f78-9f79-d989e7b621d7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.595896 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "552f794c-b47b-4f78-9f79-d989e7b621d7" (UID: "552f794c-b47b-4f78-9f79-d989e7b621d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.628149 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data" (OuterVolumeSpecName: "config-data") pod "552f794c-b47b-4f78-9f79-d989e7b621d7" (UID: "552f794c-b47b-4f78-9f79-d989e7b621d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.660735 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.660774 4803 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.660786 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552f794c-b47b-4f78-9f79-d989e7b621d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:13 crc kubenswrapper[4803]: I0127 22:16:13.660796 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmg64\" (UniqueName: \"kubernetes.io/projected/552f794c-b47b-4f78-9f79-d989e7b621d7-kube-api-access-wmg64\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:14 crc kubenswrapper[4803]: I0127 22:16:14.407840 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7cfbfb9f4d-z24kh" Jan 27 22:16:14 crc kubenswrapper[4803]: I0127 22:16:14.407878 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" event={"ID":"989f334d-f101-4247-9465-d4bf4c4732b8","Type":"ContainerStarted","Data":"db83d64a278869a5aacd8a55e0ac12ff71d8315096b4b4ead6046e9234f90af5"} Jan 27 22:16:14 crc kubenswrapper[4803]: I0127 22:16:14.431310 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" podStartSLOduration=2.914543536 podStartE2EDuration="3.431291291s" podCreationTimestamp="2026-01-27 22:16:11 +0000 UTC" firstStartedPulling="2026-01-27 22:16:12.648066979 +0000 UTC m=+1725.064088678" lastFinishedPulling="2026-01-27 22:16:13.164814734 +0000 UTC m=+1725.580836433" observedRunningTime="2026-01-27 22:16:14.421367373 +0000 UTC m=+1726.837389072" watchObservedRunningTime="2026-01-27 22:16:14.431291291 +0000 UTC m=+1726.847312980" Jan 27 22:16:14 crc kubenswrapper[4803]: I0127 22:16:14.445872 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7cfbfb9f4d-z24kh"] Jan 27 22:16:14 crc kubenswrapper[4803]: I0127 22:16:14.455793 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7cfbfb9f4d-z24kh"] Jan 27 22:16:15 crc kubenswrapper[4803]: I0127 22:16:15.420021 4803 generic.go:334] "Generic (PLEG): container finished" podID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerID="3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487" exitCode=0 Jan 27 22:16:15 crc kubenswrapper[4803]: I0127 22:16:15.420106 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1914e01-7a22-4771-b16b-d54d6c902b67","Type":"ContainerDied","Data":"3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487"} Jan 27 22:16:15 crc kubenswrapper[4803]: I0127 22:16:15.671122 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="50e2e860-a414-4c3e-888e-ac5873f13d2d" containerName="rabbitmq" containerID="cri-o://2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c" gracePeriod=604796 Jan 27 22:16:16 crc 
kubenswrapper[4803]: I0127 22:16:16.319712 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552f794c-b47b-4f78-9f79-d989e7b621d7" path="/var/lib/kubelet/pods/552f794c-b47b-4f78-9f79-d989e7b621d7/volumes" Jan 27 22:16:16 crc kubenswrapper[4803]: I0127 22:16:16.432599 4803 generic.go:334] "Generic (PLEG): container finished" podID="989f334d-f101-4247-9465-d4bf4c4732b8" containerID="db83d64a278869a5aacd8a55e0ac12ff71d8315096b4b4ead6046e9234f90af5" exitCode=0 Jan 27 22:16:16 crc kubenswrapper[4803]: I0127 22:16:16.432639 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" event={"ID":"989f334d-f101-4247-9465-d4bf4c4732b8","Type":"ContainerDied","Data":"db83d64a278869a5aacd8a55e0ac12ff71d8315096b4b4ead6046e9234f90af5"} Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.362749 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.445970 4803 generic.go:334] "Generic (PLEG): container finished" podID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerID="16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de" exitCode=0 Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.446058 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.446109 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1914e01-7a22-4771-b16b-d54d6c902b67","Type":"ContainerDied","Data":"16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de"} Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.446142 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1914e01-7a22-4771-b16b-d54d6c902b67","Type":"ContainerDied","Data":"fa200449c054acbb934fd2442d40b433a6a3104eaaaa2d75523447a5fa2f77a1"} Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.446174 4803 scope.go:117] "RemoveContainer" containerID="16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.448323 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmtgc\" (UniqueName: \"kubernetes.io/projected/d1914e01-7a22-4771-b16b-d54d6c902b67-kube-api-access-pmtgc\") pod \"d1914e01-7a22-4771-b16b-d54d6c902b67\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.448484 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-public-tls-certs\") pod \"d1914e01-7a22-4771-b16b-d54d6c902b67\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.448557 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-combined-ca-bundle\") pod \"d1914e01-7a22-4771-b16b-d54d6c902b67\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.448800 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-scripts\") pod \"d1914e01-7a22-4771-b16b-d54d6c902b67\" (UID: 
\"d1914e01-7a22-4771-b16b-d54d6c902b67\") " Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.448924 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-internal-tls-certs\") pod \"d1914e01-7a22-4771-b16b-d54d6c902b67\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.448992 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-config-data\") pod \"d1914e01-7a22-4771-b16b-d54d6c902b67\" (UID: \"d1914e01-7a22-4771-b16b-d54d6c902b67\") " Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.479634 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1914e01-7a22-4771-b16b-d54d6c902b67-kube-api-access-pmtgc" (OuterVolumeSpecName: "kube-api-access-pmtgc") pod "d1914e01-7a22-4771-b16b-d54d6c902b67" (UID: "d1914e01-7a22-4771-b16b-d54d6c902b67"). InnerVolumeSpecName "kube-api-access-pmtgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.487046 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-scripts" (OuterVolumeSpecName: "scripts") pod "d1914e01-7a22-4771-b16b-d54d6c902b67" (UID: "d1914e01-7a22-4771-b16b-d54d6c902b67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.552738 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.552764 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmtgc\" (UniqueName: \"kubernetes.io/projected/d1914e01-7a22-4771-b16b-d54d6c902b67-kube-api-access-pmtgc\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.559534 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d1914e01-7a22-4771-b16b-d54d6c902b67" (UID: "d1914e01-7a22-4771-b16b-d54d6c902b67"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.575323 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d1914e01-7a22-4771-b16b-d54d6c902b67" (UID: "d1914e01-7a22-4771-b16b-d54d6c902b67"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.645414 4803 scope.go:117] "RemoveContainer" containerID="3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.645585 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-config-data" (OuterVolumeSpecName: "config-data") pod "d1914e01-7a22-4771-b16b-d54d6c902b67" (UID: "d1914e01-7a22-4771-b16b-d54d6c902b67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.646200 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1914e01-7a22-4771-b16b-d54d6c902b67" (UID: "d1914e01-7a22-4771-b16b-d54d6c902b67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.654706 4803 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.654742 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.654751 4803 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.654760 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1914e01-7a22-4771-b16b-d54d6c902b67-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.674888 4803 scope.go:117] "RemoveContainer" containerID="9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.707688 4803 scope.go:117] "RemoveContainer" containerID="48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.735785 4803 scope.go:117] "RemoveContainer" containerID="16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de" Jan 27 22:16:17 crc kubenswrapper[4803]: E0127 22:16:17.738073 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de\": container with ID starting with 16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de not found: ID does not exist" containerID="16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.738152 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de"} err="failed to get container status \"16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de\": rpc error: code = NotFound desc = 
could not find container \"16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de\": container with ID starting with 16ab98a7c5557284adcbf80bb65459239bf724cf339d7b993b44d64d5d6b23de not found: ID does not exist" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.738181 4803 scope.go:117] "RemoveContainer" containerID="3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487" Jan 27 22:16:17 crc kubenswrapper[4803]: E0127 22:16:17.738796 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487\": container with ID starting with 3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487 not found: ID does not exist" containerID="3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.738833 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487"} err="failed to get container status \"3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487\": rpc error: code = NotFound desc = could not find container \"3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487\": container with ID starting with 3c89d692a0b241d4ffceab927307c90c37c4a64f6af903831d4a424e5600e487 not found: ID does not exist" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.738881 4803 scope.go:117] "RemoveContainer" containerID="9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2" Jan 27 22:16:17 crc kubenswrapper[4803]: E0127 22:16:17.739327 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2\": container with ID starting with 9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2 not found: ID does not exist" containerID="9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.739353 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2"} err="failed to get container status \"9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2\": rpc error: code = NotFound desc = could not find container \"9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2\": container with ID starting with 9cea058302eeb20facd7fa9ffa8eec9a49ddbb5f13c9ae45831a1233b589d2d2 not found: ID does not exist" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.739370 4803 scope.go:117] "RemoveContainer" containerID="48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58" Jan 27 22:16:17 crc kubenswrapper[4803]: E0127 22:16:17.739965 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58\": container with ID starting with 48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58 not found: ID does not exist" containerID="48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.739991 4803 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58"} err="failed to get container status \"48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58\": rpc error: code = NotFound desc = could not find container \"48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58\": container with ID starting with 48e03b47f51647cf35af294bb15c2f90d07d3c5245213cccb7d2c89864e8ff58 not found: ID does not exist" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.796917 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.823702 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.838890 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 22:16:17 crc kubenswrapper[4803]: E0127 22:16:17.839439 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-api" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.839452 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-api" Jan 27 22:16:17 crc kubenswrapper[4803]: E0127 22:16:17.839468 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-evaluator" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.839476 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-evaluator" Jan 27 22:16:17 crc kubenswrapper[4803]: E0127 22:16:17.839490 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-notifier" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.839496 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-notifier" Jan 27 22:16:17 crc kubenswrapper[4803]: E0127 22:16:17.839516 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552f794c-b47b-4f78-9f79-d989e7b621d7" containerName="heat-engine" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.839522 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="552f794c-b47b-4f78-9f79-d989e7b621d7" containerName="heat-engine" Jan 27 22:16:17 crc kubenswrapper[4803]: E0127 22:16:17.839553 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-listener" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.839559 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-listener" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.839784 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="552f794c-b47b-4f78-9f79-d989e7b621d7" containerName="heat-engine" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.839802 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-api" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.839809 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-evaluator" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.839833 4803 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-listener" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.839955 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" containerName="aodh-notifier" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.842085 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.846478 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.846725 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-vtwk7" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.846868 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.846971 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.847217 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.852999 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.865528 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j79vc\" (UniqueName: \"kubernetes.io/projected/8181a2a9-82ef-4176-b0fd-b333b51abb84-kube-api-access-j79vc\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.865617 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-internal-tls-certs\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.865801 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-scripts\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.866002 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-public-tls-certs\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.866033 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-config-data\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.866252 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-combined-ca-bundle\") pod \"aodh-0\" (UID: 
\"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.968550 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-scripts\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.968635 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-public-tls-certs\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.968658 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-config-data\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.968734 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.968785 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j79vc\" (UniqueName: \"kubernetes.io/projected/8181a2a9-82ef-4176-b0fd-b333b51abb84-kube-api-access-j79vc\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.968857 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-internal-tls-certs\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.972404 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-scripts\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.972611 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.972671 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-internal-tls-certs\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.973181 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-config-data\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.974922 4803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8181a2a9-82ef-4176-b0fd-b333b51abb84-public-tls-certs\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:17 crc kubenswrapper[4803]: I0127 22:16:17.985923 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j79vc\" (UniqueName: \"kubernetes.io/projected/8181a2a9-82ef-4176-b0fd-b333b51abb84-kube-api-access-j79vc\") pod \"aodh-0\" (UID: \"8181a2a9-82ef-4176-b0fd-b333b51abb84\") " pod="openstack/aodh-0" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.105651 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.169764 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.172501 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjbcb\" (UniqueName: \"kubernetes.io/projected/989f334d-f101-4247-9465-d4bf4c4732b8-kube-api-access-pjbcb\") pod \"989f334d-f101-4247-9465-d4bf4c4732b8\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.172880 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-ssh-key-openstack-edpm-ipam\") pod \"989f334d-f101-4247-9465-d4bf4c4732b8\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.173082 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-inventory\") pod \"989f334d-f101-4247-9465-d4bf4c4732b8\" (UID: \"989f334d-f101-4247-9465-d4bf4c4732b8\") " Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.176101 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/989f334d-f101-4247-9465-d4bf4c4732b8-kube-api-access-pjbcb" (OuterVolumeSpecName: "kube-api-access-pjbcb") pod "989f334d-f101-4247-9465-d4bf4c4732b8" (UID: "989f334d-f101-4247-9465-d4bf4c4732b8"). InnerVolumeSpecName "kube-api-access-pjbcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.209626 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-inventory" (OuterVolumeSpecName: "inventory") pod "989f334d-f101-4247-9465-d4bf4c4732b8" (UID: "989f334d-f101-4247-9465-d4bf4c4732b8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.216345 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "989f334d-f101-4247-9465-d4bf4c4732b8" (UID: "989f334d-f101-4247-9465-d4bf4c4732b8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.276698 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.276981 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/989f334d-f101-4247-9465-d4bf4c4732b8-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.276995 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjbcb\" (UniqueName: \"kubernetes.io/projected/989f334d-f101-4247-9465-d4bf4c4732b8-kube-api-access-pjbcb\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.345214 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1914e01-7a22-4771-b16b-d54d6c902b67" path="/var/lib/kubelet/pods/d1914e01-7a22-4771-b16b-d54d6c902b67/volumes" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.459172 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.459190 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-j26c9" event={"ID":"989f334d-f101-4247-9465-d4bf4c4732b8","Type":"ContainerDied","Data":"e951b563920fa41e10370b2ab3b6570c9c7e72de5c669fefe026abdbe4fe98bb"} Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.459258 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e951b563920fa41e10370b2ab3b6570c9c7e72de5c669fefe026abdbe4fe98bb" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.523638 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f"] Jan 27 22:16:18 crc kubenswrapper[4803]: E0127 22:16:18.524386 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989f334d-f101-4247-9465-d4bf4c4732b8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.524405 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="989f334d-f101-4247-9465-d4bf4c4732b8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.525064 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="989f334d-f101-4247-9465-d4bf4c4732b8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.527017 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.530132 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.530617 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.530709 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.530785 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.550597 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f"] Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.583194 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.583293 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.583356 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.583446 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4msnp\" (UniqueName: \"kubernetes.io/projected/e95bd3a3-5cb5-47c7-906d-addca2c174a3-kube-api-access-4msnp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.685790 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.686319 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.686445 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4msnp\" (UniqueName: \"kubernetes.io/projected/e95bd3a3-5cb5-47c7-906d-addca2c174a3-kube-api-access-4msnp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.686601 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.693413 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.693532 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.696558 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.704329 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.708079 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4msnp\" (UniqueName: \"kubernetes.io/projected/e95bd3a3-5cb5-47c7-906d-addca2c174a3-kube-api-access-4msnp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:18 crc kubenswrapper[4803]: I0127 22:16:18.848178 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" Jan 27 22:16:19 crc kubenswrapper[4803]: W0127 22:16:19.428898 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode95bd3a3_5cb5_47c7_906d_addca2c174a3.slice/crio-305860296b807be4cce01cc8a98477d495944d1be6ec8bae81f7dcd232156367 WatchSource:0}: Error finding container 305860296b807be4cce01cc8a98477d495944d1be6ec8bae81f7dcd232156367: Status 404 returned error can't find the container with id 305860296b807be4cce01cc8a98477d495944d1be6ec8bae81f7dcd232156367 Jan 27 22:16:19 crc kubenswrapper[4803]: I0127 22:16:19.431276 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f"] Jan 27 22:16:19 crc kubenswrapper[4803]: I0127 22:16:19.483178 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" event={"ID":"e95bd3a3-5cb5-47c7-906d-addca2c174a3","Type":"ContainerStarted","Data":"305860296b807be4cce01cc8a98477d495944d1be6ec8bae81f7dcd232156367"} Jan 27 22:16:19 crc kubenswrapper[4803]: I0127 22:16:19.488068 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8181a2a9-82ef-4176-b0fd-b333b51abb84","Type":"ContainerStarted","Data":"225fc6f8027b0121857e816d5e73b7473eb5250e48d78d5bcbde2d9127235105"} Jan 27 22:16:19 crc kubenswrapper[4803]: I0127 22:16:19.488117 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8181a2a9-82ef-4176-b0fd-b333b51abb84","Type":"ContainerStarted","Data":"c99a72e3fce611896f031aa9193aed8a4acdef487f32e0908d19e1836f014bdd"} Jan 27 22:16:20 crc kubenswrapper[4803]: I0127 22:16:20.506911 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" event={"ID":"e95bd3a3-5cb5-47c7-906d-addca2c174a3","Type":"ContainerStarted","Data":"210a209b60659a8b2ed4e98a2a6ab254ba378aca654f0ede588e3d2380dca88b"} Jan 27 22:16:20 crc kubenswrapper[4803]: I0127 22:16:20.509269 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8181a2a9-82ef-4176-b0fd-b333b51abb84","Type":"ContainerStarted","Data":"8dbdee240754d14f6d459b32a93513b0c10672cd0bdc9491e37b5c274cb8c7a5"} Jan 27 22:16:20 crc kubenswrapper[4803]: I0127 22:16:20.533215 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" podStartSLOduration=1.972295092 podStartE2EDuration="2.533195143s" podCreationTimestamp="2026-01-27 22:16:18 +0000 UTC" firstStartedPulling="2026-01-27 22:16:19.448893818 +0000 UTC m=+1731.864915537" lastFinishedPulling="2026-01-27 22:16:20.009793889 +0000 UTC m=+1732.425815588" observedRunningTime="2026-01-27 22:16:20.532158585 +0000 UTC m=+1732.948180304" watchObservedRunningTime="2026-01-27 22:16:20.533195143 +0000 UTC m=+1732.949216842" Jan 27 22:16:21 crc kubenswrapper[4803]: I0127 22:16:21.553805 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8181a2a9-82ef-4176-b0fd-b333b51abb84","Type":"ContainerStarted","Data":"e37913629d48eb6890da8b914a716bfbf9068ef44a7de68006218a982a9c2bad"} Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.499992 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.578210 4803 generic.go:334] "Generic (PLEG): container finished" podID="50e2e860-a414-4c3e-888e-ac5873f13d2d" containerID="2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c" exitCode=0 Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.578264 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"50e2e860-a414-4c3e-888e-ac5873f13d2d","Type":"ContainerDied","Data":"2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c"} Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.578298 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"50e2e860-a414-4c3e-888e-ac5873f13d2d","Type":"ContainerDied","Data":"322ea0c137cc48a65e3b48eff23bea9203168f789fa9365953812a72aae7be22"} Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.578316 4803 scope.go:117] "RemoveContainer" containerID="2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.578368 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.596784 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\") pod \"50e2e860-a414-4c3e-888e-ac5873f13d2d\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.598563 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50e2e860-a414-4c3e-888e-ac5873f13d2d-pod-info\") pod \"50e2e860-a414-4c3e-888e-ac5873f13d2d\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.598742 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2szl\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-kube-api-access-t2szl\") pod \"50e2e860-a414-4c3e-888e-ac5873f13d2d\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.598774 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-erlang-cookie\") pod \"50e2e860-a414-4c3e-888e-ac5873f13d2d\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.598825 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50e2e860-a414-4c3e-888e-ac5873f13d2d-erlang-cookie-secret\") pod \"50e2e860-a414-4c3e-888e-ac5873f13d2d\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.598875 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-plugins\") pod \"50e2e860-a414-4c3e-888e-ac5873f13d2d\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.599018 4803 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-tls\") pod \"50e2e860-a414-4c3e-888e-ac5873f13d2d\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.599075 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-server-conf\") pod \"50e2e860-a414-4c3e-888e-ac5873f13d2d\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.599133 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-config-data\") pod \"50e2e860-a414-4c3e-888e-ac5873f13d2d\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.599165 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-plugins-conf\") pod \"50e2e860-a414-4c3e-888e-ac5873f13d2d\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.599193 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-confd\") pod \"50e2e860-a414-4c3e-888e-ac5873f13d2d\" (UID: \"50e2e860-a414-4c3e-888e-ac5873f13d2d\") " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.601756 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "50e2e860-a414-4c3e-888e-ac5873f13d2d" (UID: "50e2e860-a414-4c3e-888e-ac5873f13d2d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.602550 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "50e2e860-a414-4c3e-888e-ac5873f13d2d" (UID: "50e2e860-a414-4c3e-888e-ac5873f13d2d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.603238 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "50e2e860-a414-4c3e-888e-ac5873f13d2d" (UID: "50e2e860-a414-4c3e-888e-ac5873f13d2d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.618291 4803 scope.go:117] "RemoveContainer" containerID="c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.620345 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-kube-api-access-t2szl" (OuterVolumeSpecName: "kube-api-access-t2szl") pod "50e2e860-a414-4c3e-888e-ac5873f13d2d" (UID: "50e2e860-a414-4c3e-888e-ac5873f13d2d"). 
InnerVolumeSpecName "kube-api-access-t2szl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.634339 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "50e2e860-a414-4c3e-888e-ac5873f13d2d" (UID: "50e2e860-a414-4c3e-888e-ac5873f13d2d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.636653 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e2e860-a414-4c3e-888e-ac5873f13d2d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "50e2e860-a414-4c3e-888e-ac5873f13d2d" (UID: "50e2e860-a414-4c3e-888e-ac5873f13d2d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.637413 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/50e2e860-a414-4c3e-888e-ac5873f13d2d-pod-info" (OuterVolumeSpecName: "pod-info") pod "50e2e860-a414-4c3e-888e-ac5873f13d2d" (UID: "50e2e860-a414-4c3e-888e-ac5873f13d2d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.696306 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1" (OuterVolumeSpecName: "persistence") pod "50e2e860-a414-4c3e-888e-ac5873f13d2d" (UID: "50e2e860-a414-4c3e-888e-ac5873f13d2d"). InnerVolumeSpecName "pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.704008 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.704037 4803 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.704062 4803 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\") on node \"crc\" " Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.704073 4803 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50e2e860-a414-4c3e-888e-ac5873f13d2d-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.704083 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2szl\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-kube-api-access-t2szl\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.704093 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.704102 4803 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50e2e860-a414-4c3e-888e-ac5873f13d2d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.704110 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.713512 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-server-conf" (OuterVolumeSpecName: "server-conf") pod "50e2e860-a414-4c3e-888e-ac5873f13d2d" (UID: "50e2e860-a414-4c3e-888e-ac5873f13d2d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.741650 4803 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.741812 4803 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1") on node "crc" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.742125 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-config-data" (OuterVolumeSpecName: "config-data") pod "50e2e860-a414-4c3e-888e-ac5873f13d2d" (UID: "50e2e860-a414-4c3e-888e-ac5873f13d2d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.798987 4803 scope.go:117] "RemoveContainer" containerID="2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c" Jan 27 22:16:22 crc kubenswrapper[4803]: E0127 22:16:22.806503 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c\": container with ID starting with 2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c not found: ID does not exist" containerID="2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.806587 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c"} err="failed to get container status \"2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c\": rpc error: code = NotFound desc = could not find container \"2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c\": container with ID starting with 2c35ff7b1c584fa8f2b8b6fade7fd3f5fa549997ebfd4903b1e3164e6908ff8c not found: ID does not exist" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.806623 4803 scope.go:117] "RemoveContainer" containerID="c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.806922 4803 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.806944 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50e2e860-a414-4c3e-888e-ac5873f13d2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.806955 4803 reconciler_common.go:293] "Volume detached for volume \"pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.806934 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "50e2e860-a414-4c3e-888e-ac5873f13d2d" (UID: "50e2e860-a414-4c3e-888e-ac5873f13d2d"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:16:22 crc kubenswrapper[4803]: E0127 22:16:22.807292 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd\": container with ID starting with c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd not found: ID does not exist" containerID="c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.807347 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd"} err="failed to get container status \"c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd\": rpc error: code = NotFound desc = could not find container \"c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd\": container with ID starting with c6368d2f60f25db161f1478ffbf2cfd68e9f1c4a4837a489d521c30c0c9edfcd not found: ID does not exist" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.909985 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50e2e860-a414-4c3e-888e-ac5873f13d2d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.914690 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.928587 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.948136 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 22:16:22 crc kubenswrapper[4803]: E0127 22:16:22.948883 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e2e860-a414-4c3e-888e-ac5873f13d2d" containerName="rabbitmq" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.949050 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e2e860-a414-4c3e-888e-ac5873f13d2d" containerName="rabbitmq" Jan 27 22:16:22 crc kubenswrapper[4803]: E0127 22:16:22.949144 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e2e860-a414-4c3e-888e-ac5873f13d2d" containerName="setup-container" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.949199 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e2e860-a414-4c3e-888e-ac5873f13d2d" containerName="setup-container" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.949505 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e2e860-a414-4c3e-888e-ac5873f13d2d" containerName="rabbitmq" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.951057 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 22:16:22 crc kubenswrapper[4803]: I0127 22:16:22.967408 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.116120 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.116204 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.116264 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.116358 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/841c6d24-8f9e-401f-8045-0e76e7d93754-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.116395 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/841c6d24-8f9e-401f-8045-0e76e7d93754-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.116423 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/841c6d24-8f9e-401f-8045-0e76e7d93754-pod-info\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.116469 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/841c6d24-8f9e-401f-8045-0e76e7d93754-config-data\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.116510 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.116580 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8f92\" (UniqueName: 
\"kubernetes.io/projected/841c6d24-8f9e-401f-8045-0e76e7d93754-kube-api-access-p8f92\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.116621 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/841c6d24-8f9e-401f-8045-0e76e7d93754-server-conf\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.116641 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.217664 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.217730 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8f92\" (UniqueName: \"kubernetes.io/projected/841c6d24-8f9e-401f-8045-0e76e7d93754-kube-api-access-p8f92\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.217766 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/841c6d24-8f9e-401f-8045-0e76e7d93754-server-conf\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.217784 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.217855 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.217894 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.217949 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: 
\"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.217999 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/841c6d24-8f9e-401f-8045-0e76e7d93754-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.218021 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/841c6d24-8f9e-401f-8045-0e76e7d93754-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.218033 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/841c6d24-8f9e-401f-8045-0e76e7d93754-pod-info\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.218077 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/841c6d24-8f9e-401f-8045-0e76e7d93754-config-data\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.219235 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/841c6d24-8f9e-401f-8045-0e76e7d93754-config-data\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.219708 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/841c6d24-8f9e-401f-8045-0e76e7d93754-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.220345 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/841c6d24-8f9e-401f-8045-0e76e7d93754-server-conf\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.220539 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.220592 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.222289 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/841c6d24-8f9e-401f-8045-0e76e7d93754-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.222958 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.223227 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/841c6d24-8f9e-401f-8045-0e76e7d93754-pod-info\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.223369 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/841c6d24-8f9e-401f-8045-0e76e7d93754-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.223890 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.223923 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1dd4a31266194ed34fd80142b7bb117a8dffded2c221ac334a264cd95330634/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.239282 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8f92\" (UniqueName: \"kubernetes.io/projected/841c6d24-8f9e-401f-8045-0e76e7d93754-kube-api-access-p8f92\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.315393 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6aa55d67-4060-4c80-a7b5-8a53e3b449d1\") pod \"rabbitmq-server-1\" (UID: \"841c6d24-8f9e-401f-8045-0e76e7d93754\") " pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.580221 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.594535 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8181a2a9-82ef-4176-b0fd-b333b51abb84","Type":"ContainerStarted","Data":"61f05b0b968fc46a6a102407b7ea3548a9de50707e6a7e78031cfd6c6172d79d"} Jan 27 22:16:23 crc kubenswrapper[4803]: I0127 22:16:23.629641 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.9814097569999998 podStartE2EDuration="6.629613498s" podCreationTimestamp="2026-01-27 22:16:17 +0000 UTC" firstStartedPulling="2026-01-27 22:16:18.701054066 +0000 UTC m=+1731.117075775" lastFinishedPulling="2026-01-27 22:16:22.349257817 +0000 UTC m=+1734.765279516" observedRunningTime="2026-01-27 22:16:23.619131135 +0000 UTC m=+1736.035152844" watchObservedRunningTime="2026-01-27 22:16:23.629613498 +0000 UTC m=+1736.045635207" Jan 27 22:16:24 crc kubenswrapper[4803]: I0127 22:16:24.253247 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 22:16:24 crc kubenswrapper[4803]: I0127 22:16:24.308315 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:16:24 crc kubenswrapper[4803]: E0127 22:16:24.308805 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:16:24 crc kubenswrapper[4803]: I0127 22:16:24.344684 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e2e860-a414-4c3e-888e-ac5873f13d2d" path="/var/lib/kubelet/pods/50e2e860-a414-4c3e-888e-ac5873f13d2d/volumes" Jan 27 22:16:24 crc kubenswrapper[4803]: I0127 22:16:24.605272 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"841c6d24-8f9e-401f-8045-0e76e7d93754","Type":"ContainerStarted","Data":"132db7dd1a10207a8b2e63093f1b003a8b513cf62491e08a559e35bbce910772"} Jan 27 22:16:26 crc kubenswrapper[4803]: I0127 22:16:26.626558 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"841c6d24-8f9e-401f-8045-0e76e7d93754","Type":"ContainerStarted","Data":"df1f057586a7930eafe87afdf31a00a5b8a56301fab69268e22af9e913cd5f50"} Jan 27 22:16:30 crc kubenswrapper[4803]: I0127 22:16:30.776747 4803 scope.go:117] "RemoveContainer" containerID="114ca0918f39cb176817e5b6eef554e2a7b7fa226bf66ee4163d2dfa5514fd40" Jan 27 22:16:30 crc kubenswrapper[4803]: I0127 22:16:30.806742 4803 scope.go:117] "RemoveContainer" containerID="35213ba6fcff817ece2d58f0ad20116c4c080029024f22b6cb2454e2ce320988" Jan 27 22:16:30 crc kubenswrapper[4803]: I0127 22:16:30.866326 4803 scope.go:117] "RemoveContainer" containerID="0df0fa41de09f9d42ff3d143051bd60fcb1927165b4af21f9d910db4c20c28bd" Jan 27 22:16:30 crc kubenswrapper[4803]: I0127 22:16:30.929278 4803 scope.go:117] "RemoveContainer" containerID="82bfc22bb9db9ea4a1413029c6f7c8b61c687318ccf27836bc6fc4b414138873" Jan 27 22:16:30 crc kubenswrapper[4803]: I0127 22:16:30.993454 4803 scope.go:117] "RemoveContainer" containerID="37660c0d8c80dda9bc70f19659f95645026b85aeba3541cd465b60b07560be08" Jan 27 
22:16:31 crc kubenswrapper[4803]: I0127 22:16:31.067986 4803 scope.go:117] "RemoveContainer" containerID="7dbaa563d0a7019e5b80f922c0893e8cddb470f7154dd9146728f8a5b5c06a9e" Jan 27 22:16:31 crc kubenswrapper[4803]: I0127 22:16:31.121006 4803 scope.go:117] "RemoveContainer" containerID="c5a3ecc082d0bd45b33fb4d378b55ad449b2c0268808df488b266d8add88c35e" Jan 27 22:16:38 crc kubenswrapper[4803]: I0127 22:16:38.308049 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:16:38 crc kubenswrapper[4803]: E0127 22:16:38.308937 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:16:50 crc kubenswrapper[4803]: I0127 22:16:50.307935 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:16:50 crc kubenswrapper[4803]: E0127 22:16:50.308701 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:16:59 crc kubenswrapper[4803]: I0127 22:16:59.009265 4803 generic.go:334] "Generic (PLEG): container finished" podID="841c6d24-8f9e-401f-8045-0e76e7d93754" containerID="df1f057586a7930eafe87afdf31a00a5b8a56301fab69268e22af9e913cd5f50" exitCode=0 Jan 27 22:16:59 crc kubenswrapper[4803]: I0127 22:16:59.009743 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"841c6d24-8f9e-401f-8045-0e76e7d93754","Type":"ContainerDied","Data":"df1f057586a7930eafe87afdf31a00a5b8a56301fab69268e22af9e913cd5f50"} Jan 27 22:17:00 crc kubenswrapper[4803]: I0127 22:17:00.021725 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"841c6d24-8f9e-401f-8045-0e76e7d93754","Type":"ContainerStarted","Data":"f37f54228e22bf3513c02eb5838c87850bacffc55aa9583c7a6f38bb46dff03c"} Jan 27 22:17:00 crc kubenswrapper[4803]: I0127 22:17:00.022615 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 27 22:17:00 crc kubenswrapper[4803]: I0127 22:17:00.053653 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=38.053635865 podStartE2EDuration="38.053635865s" podCreationTimestamp="2026-01-27 22:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:17:00.048912118 +0000 UTC m=+1772.464933827" watchObservedRunningTime="2026-01-27 22:17:00.053635865 +0000 UTC m=+1772.469657564" Jan 27 22:17:04 crc kubenswrapper[4803]: I0127 22:17:04.306777 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:17:04 crc kubenswrapper[4803]: E0127 22:17:04.307711 4803 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:17:13 crc kubenswrapper[4803]: I0127 22:17:13.585303 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 27 22:17:13 crc kubenswrapper[4803]: I0127 22:17:13.694482 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 22:17:17 crc kubenswrapper[4803]: I0127 22:17:17.905748 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="254b4a13-ff42-41cb-ae18-373ad9cfc583" containerName="rabbitmq" containerID="cri-o://8ec67556886515f6008a9fa50849706d9289834f472c1d9b14eb5cee98a8b6cc" gracePeriod=604796 Jan 27 22:17:19 crc kubenswrapper[4803]: I0127 22:17:19.307100 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:17:19 crc kubenswrapper[4803]: E0127 22:17:19.309049 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:17:22 crc kubenswrapper[4803]: I0127 22:17:22.875227 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="254b4a13-ff42-41cb-ae18-373ad9cfc583" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.325631 4803 generic.go:334] "Generic (PLEG): container finished" podID="254b4a13-ff42-41cb-ae18-373ad9cfc583" containerID="8ec67556886515f6008a9fa50849706d9289834f472c1d9b14eb5cee98a8b6cc" exitCode=0 Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.334197 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"254b4a13-ff42-41cb-ae18-373ad9cfc583","Type":"ContainerDied","Data":"8ec67556886515f6008a9fa50849706d9289834f472c1d9b14eb5cee98a8b6cc"} Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.672239 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.863708 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-tls\") pod \"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.863764 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w75ct\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-kube-api-access-w75ct\") pod \"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.863790 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-server-conf\") pod \"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.863821 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-config-data\") pod \"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.863881 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-confd\") pod \"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.864526 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") pod \"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.864841 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/254b4a13-ff42-41cb-ae18-373ad9cfc583-pod-info\") pod \"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.864908 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-erlang-cookie\") pod \"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.865007 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-plugins-conf\") pod \"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.865049 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/254b4a13-ff42-41cb-ae18-373ad9cfc583-erlang-cookie-secret\") pod 
\"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.865102 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-plugins\") pod \"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.866216 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.866380 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.867189 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.874543 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254b4a13-ff42-41cb-ae18-373ad9cfc583-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.876871 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/254b4a13-ff42-41cb-ae18-373ad9cfc583-pod-info" (OuterVolumeSpecName: "pod-info") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.881115 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.884751 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-kube-api-access-w75ct" (OuterVolumeSpecName: "kube-api-access-w75ct") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583"). InnerVolumeSpecName "kube-api-access-w75ct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.930529 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-config-data" (OuterVolumeSpecName: "config-data") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:17:24 crc kubenswrapper[4803]: E0127 22:17:24.936228 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2 podName:254b4a13-ff42-41cb-ae18-373ad9cfc583 nodeName:}" failed. No retries permitted until 2026-01-27 22:17:25.436200631 +0000 UTC m=+1797.852222330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.960409 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-server-conf" (OuterVolumeSpecName: "server-conf") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.968028 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.968058 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w75ct\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-kube-api-access-w75ct\") on node \"crc\" DevicePath \"\"" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.968069 4803 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.968078 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.968086 4803 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/254b4a13-ff42-41cb-ae18-373ad9cfc583-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.968095 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.968105 4803 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/254b4a13-ff42-41cb-ae18-373ad9cfc583-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.968116 4803 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/254b4a13-ff42-41cb-ae18-373ad9cfc583-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 22:17:24 crc kubenswrapper[4803]: I0127 22:17:24.968128 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.028807 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.070938 4803 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/254b4a13-ff42-41cb-ae18-373ad9cfc583-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.339610 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"254b4a13-ff42-41cb-ae18-373ad9cfc583","Type":"ContainerDied","Data":"a57ab5a747bb434ba331cf5f68873fd57ca318b6c2bb40fb6da46558fef0f2b8"} Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.339682 4803 scope.go:117] "RemoveContainer" containerID="8ec67556886515f6008a9fa50849706d9289834f472c1d9b14eb5cee98a8b6cc" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.339680 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.365906 4803 scope.go:117] "RemoveContainer" containerID="ca8197e506a06cf62307479ac31e9ea0d6627d531e6aead1b3345820efde09db" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.480030 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") pod \"254b4a13-ff42-41cb-ae18-373ad9cfc583\" (UID: \"254b4a13-ff42-41cb-ae18-373ad9cfc583\") " Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.532899 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2" (OuterVolumeSpecName: "persistence") pod "254b4a13-ff42-41cb-ae18-373ad9cfc583" (UID: "254b4a13-ff42-41cb-ae18-373ad9cfc583"). InnerVolumeSpecName "pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.583072 4803 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") on node \"crc\" " Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.609805 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.629982 4803 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.630161 4803 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2") on node "crc" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.637648 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.649779 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 22:17:25 crc kubenswrapper[4803]: E0127 22:17:25.650287 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254b4a13-ff42-41cb-ae18-373ad9cfc583" containerName="rabbitmq" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.650305 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="254b4a13-ff42-41cb-ae18-373ad9cfc583" containerName="rabbitmq" Jan 27 22:17:25 crc kubenswrapper[4803]: E0127 22:17:25.650329 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254b4a13-ff42-41cb-ae18-373ad9cfc583" containerName="setup-container" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.650353 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="254b4a13-ff42-41cb-ae18-373ad9cfc583" containerName="setup-container" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.650573 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="254b4a13-ff42-41cb-ae18-373ad9cfc583" containerName="rabbitmq" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.652371 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.669027 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.684789 4803 reconciler_common.go:293] "Volume detached for volume \"pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") on node \"crc\" DevicePath \"\"" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.786366 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpbvf\" (UniqueName: \"kubernetes.io/projected/d2af9573-4bb0-4528-a405-959329fbe7d7-kube-api-access-wpbvf\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.786423 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2af9573-4bb0-4528-a405-959329fbe7d7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.786511 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2af9573-4bb0-4528-a405-959329fbe7d7-config-data\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.786542 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.786596 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.786640 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2af9573-4bb0-4528-a405-959329fbe7d7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.786711 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.786904 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.786948 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.787107 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2af9573-4bb0-4528-a405-959329fbe7d7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.787138 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2af9573-4bb0-4528-a405-959329fbe7d7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.889053 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.889097 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.889171 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2af9573-4bb0-4528-a405-959329fbe7d7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.889188 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2af9573-4bb0-4528-a405-959329fbe7d7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.889220 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpbvf\" (UniqueName: \"kubernetes.io/projected/d2af9573-4bb0-4528-a405-959329fbe7d7-kube-api-access-wpbvf\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.889245 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2af9573-4bb0-4528-a405-959329fbe7d7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.889276 
4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2af9573-4bb0-4528-a405-959329fbe7d7-config-data\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.889370 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.889416 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.889456 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2af9573-4bb0-4528-a405-959329fbe7d7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.889478 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.890243 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2af9573-4bb0-4528-a405-959329fbe7d7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.890314 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2af9573-4bb0-4528-a405-959329fbe7d7-config-data\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.890781 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.891004 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.891213 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2af9573-4bb0-4528-a405-959329fbe7d7-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.891535 4803 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.891560 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a4e240b2c50a8a372898adfdb57e49d491cca8373a1fb16c49708c9e8f1afc73/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.896804 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2af9573-4bb0-4528-a405-959329fbe7d7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.899125 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.900689 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2af9573-4bb0-4528-a405-959329fbe7d7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.903716 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2af9573-4bb0-4528-a405-959329fbe7d7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.911598 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpbvf\" (UniqueName: \"kubernetes.io/projected/d2af9573-4bb0-4528-a405-959329fbe7d7-kube-api-access-wpbvf\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:25 crc kubenswrapper[4803]: I0127 22:17:25.990187 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a687682-751b-46cc-a9bf-8794dcaa96c2\") pod \"rabbitmq-server-0\" (UID: \"d2af9573-4bb0-4528-a405-959329fbe7d7\") " pod="openstack/rabbitmq-server-0" Jan 27 22:17:26 crc kubenswrapper[4803]: I0127 22:17:26.274102 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 22:17:26 crc kubenswrapper[4803]: I0127 22:17:26.323586 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="254b4a13-ff42-41cb-ae18-373ad9cfc583" path="/var/lib/kubelet/pods/254b4a13-ff42-41cb-ae18-373ad9cfc583/volumes" Jan 27 22:17:26 crc kubenswrapper[4803]: I0127 22:17:26.796712 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 22:17:27 crc kubenswrapper[4803]: I0127 22:17:27.369068 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2af9573-4bb0-4528-a405-959329fbe7d7","Type":"ContainerStarted","Data":"13a8a1f6c5a2ea0eed5a55ce5a7f797a7e3b93d00c4b9554314708eb5eedf8b2"} Jan 27 22:17:29 crc kubenswrapper[4803]: I0127 22:17:29.414653 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2af9573-4bb0-4528-a405-959329fbe7d7","Type":"ContainerStarted","Data":"4189559424d9339930e17b4ad755d18b2b9a00d54df84dc878c10d9c4854b584"} Jan 27 22:17:31 crc kubenswrapper[4803]: I0127 22:17:31.383310 4803 scope.go:117] "RemoveContainer" containerID="744dc1266933be12f4db531dc2df53a58237c8a4f13be5a6231dd0ebbc2d4974" Jan 27 22:17:31 crc kubenswrapper[4803]: I0127 22:17:31.429219 4803 scope.go:117] "RemoveContainer" containerID="3c7e5f1f6a436b9c49d22d3deb23fe42b87023f3289a025ef8b139d422be95b6" Jan 27 22:17:33 crc kubenswrapper[4803]: I0127 22:17:33.307590 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:17:33 crc kubenswrapper[4803]: E0127 22:17:33.308242 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:17:47 crc kubenswrapper[4803]: I0127 22:17:47.307172 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:17:47 crc kubenswrapper[4803]: I0127 22:17:47.629806 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"4cb856bad298c87c22d13858541beda57c61d6cafc0180491d51e1bced258716"} Jan 27 22:18:01 crc kubenswrapper[4803]: I0127 22:18:01.788439 4803 generic.go:334] "Generic (PLEG): container finished" podID="d2af9573-4bb0-4528-a405-959329fbe7d7" containerID="4189559424d9339930e17b4ad755d18b2b9a00d54df84dc878c10d9c4854b584" exitCode=0 Jan 27 22:18:01 crc kubenswrapper[4803]: I0127 22:18:01.788539 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2af9573-4bb0-4528-a405-959329fbe7d7","Type":"ContainerDied","Data":"4189559424d9339930e17b4ad755d18b2b9a00d54df84dc878c10d9c4854b584"} Jan 27 22:18:02 crc kubenswrapper[4803]: I0127 22:18:02.801722 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2af9573-4bb0-4528-a405-959329fbe7d7","Type":"ContainerStarted","Data":"be0a7f1a11c817853714cdef9f14984f19f789ae17a651ecfed577f88c7b2f9f"} Jan 27 22:18:02 crc kubenswrapper[4803]: I0127 22:18:02.802235 4803 
Jan 27 22:18:02 crc kubenswrapper[4803]: I0127 22:18:02.802235 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 27 22:18:02 crc kubenswrapper[4803]: I0127 22:18:02.829995 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.829973668 podStartE2EDuration="37.829973668s" podCreationTimestamp="2026-01-27 22:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:18:02.827313427 +0000 UTC m=+1835.243335146" watchObservedRunningTime="2026-01-27 22:18:02.829973668 +0000 UTC m=+1835.245995367"
Jan 27 22:18:16 crc kubenswrapper[4803]: I0127 22:18:16.277333 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 27 22:18:31 crc kubenswrapper[4803]: I0127 22:18:31.557547 4803 scope.go:117] "RemoveContainer" containerID="af4f5e910378d94e6a1207127ee81bcd1053a61b73a41a5a651e7c092b1502e0"
Jan 27 22:18:31 crc kubenswrapper[4803]: I0127 22:18:31.587341 4803 scope.go:117] "RemoveContainer" containerID="f3d49fc150b52ca36b05e5c5f96f6e9924ea37d3d3ee59d60abaeb92cd16709e"
Jan 27 22:19:23 crc kubenswrapper[4803]: I0127 22:19:23.244564 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-54c764888c-dpmfw" podUID="912aaad5-2b5b-431b-821f-0ba813a0faaf" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 27 22:19:31 crc kubenswrapper[4803]: I0127 22:19:31.655729 4803 scope.go:117] "RemoveContainer" containerID="91e6f87023c54ff05031e27b2e720b8d5d7fbd7b9e15e7132d1c3c580fe5a30d"
Jan 27 22:19:31 crc kubenswrapper[4803]: I0127 22:19:31.691490 4803 scope.go:117] "RemoveContainer" containerID="d2137644e16498ed9498042acf081b4e24799d067312a5dae03f4d1d622921ad"
Jan 27 22:19:31 crc kubenswrapper[4803]: I0127 22:19:31.727729 4803 scope.go:117] "RemoveContainer" containerID="cc56e4fd33dd400b1b4a1bcaa618404d311d59a5357dca918e8fafbb9500d3f8"
Jan 27 22:19:31 crc kubenswrapper[4803]: I0127 22:19:31.758043 4803 scope.go:117] "RemoveContainer" containerID="09c4b800274036d5f066087441dcef1974c31b799b71687dacc78b8b83bb06ab"
Jan 27 22:19:47 crc kubenswrapper[4803]: I0127 22:19:47.037726 4803 generic.go:334] "Generic (PLEG): container finished" podID="e95bd3a3-5cb5-47c7-906d-addca2c174a3" containerID="210a209b60659a8b2ed4e98a2a6ab254ba378aca654f0ede588e3d2380dca88b" exitCode=0
Jan 27 22:19:47 crc kubenswrapper[4803]: I0127 22:19:47.037805 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" event={"ID":"e95bd3a3-5cb5-47c7-906d-addca2c174a3","Type":"ContainerDied","Data":"210a209b60659a8b2ed4e98a2a6ab254ba378aca654f0ede588e3d2380dca88b"}
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.564261 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f"
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.691930 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-bootstrap-combined-ca-bundle\") pod \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") "
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.692001 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-ssh-key-openstack-edpm-ipam\") pod \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") "
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.692025 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4msnp\" (UniqueName: \"kubernetes.io/projected/e95bd3a3-5cb5-47c7-906d-addca2c174a3-kube-api-access-4msnp\") pod \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") "
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.692056 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-inventory\") pod \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\" (UID: \"e95bd3a3-5cb5-47c7-906d-addca2c174a3\") "
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.697642 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e95bd3a3-5cb5-47c7-906d-addca2c174a3-kube-api-access-4msnp" (OuterVolumeSpecName: "kube-api-access-4msnp") pod "e95bd3a3-5cb5-47c7-906d-addca2c174a3" (UID: "e95bd3a3-5cb5-47c7-906d-addca2c174a3"). InnerVolumeSpecName "kube-api-access-4msnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.700308 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e95bd3a3-5cb5-47c7-906d-addca2c174a3" (UID: "e95bd3a3-5cb5-47c7-906d-addca2c174a3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.723333 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-inventory" (OuterVolumeSpecName: "inventory") pod "e95bd3a3-5cb5-47c7-906d-addca2c174a3" (UID: "e95bd3a3-5cb5-47c7-906d-addca2c174a3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.725097 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e95bd3a3-5cb5-47c7-906d-addca2c174a3" (UID: "e95bd3a3-5cb5-47c7-906d-addca2c174a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.794873 4803 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.794915 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.794928 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4msnp\" (UniqueName: \"kubernetes.io/projected/e95bd3a3-5cb5-47c7-906d-addca2c174a3-kube-api-access-4msnp\") on node \"crc\" DevicePath \"\""
Jan 27 22:19:48 crc kubenswrapper[4803]: I0127 22:19:48.794941 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e95bd3a3-5cb5-47c7-906d-addca2c174a3-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.074368 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f" event={"ID":"e95bd3a3-5cb5-47c7-906d-addca2c174a3","Type":"ContainerDied","Data":"305860296b807be4cce01cc8a98477d495944d1be6ec8bae81f7dcd232156367"}
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.074686 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="305860296b807be4cce01cc8a98477d495944d1be6ec8bae81f7dcd232156367"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.074652 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.165039 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"]
Jan 27 22:19:49 crc kubenswrapper[4803]: E0127 22:19:49.165523 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95bd3a3-5cb5-47c7-906d-addca2c174a3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.165541 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95bd3a3-5cb5-47c7-906d-addca2c174a3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.165756 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95bd3a3-5cb5-47c7-906d-addca2c174a3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.167281 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.170552 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.170560 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.170872 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.173524 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.193749 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"]
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.205508 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fppg9\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.205644 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fppg9\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.205700 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fk9j\" (UniqueName: \"kubernetes.io/projected/df3f9adb-ad8a-484b-89f7-fb1689886470-kube-api-access-9fk9j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fppg9\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.307246 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fppg9\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.307388 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fppg9\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.307434 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fk9j\" (UniqueName: \"kubernetes.io/projected/df3f9adb-ad8a-484b-89f7-fb1689886470-kube-api-access-9fk9j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fppg9\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.323100 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fppg9\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.324527 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fk9j\" (UniqueName: \"kubernetes.io/projected/df3f9adb-ad8a-484b-89f7-fb1689886470-kube-api-access-9fk9j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fppg9\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.336422 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fppg9\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"
Jan 27 22:19:49 crc kubenswrapper[4803]: I0127 22:19:49.490717 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"
Jan 27 22:19:50 crc kubenswrapper[4803]: I0127 22:19:50.044802 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9"]
Jan 27 22:19:50 crc kubenswrapper[4803]: I0127 22:19:50.085269 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9" event={"ID":"df3f9adb-ad8a-484b-89f7-fb1689886470","Type":"ContainerStarted","Data":"acba8732204f9975b46ad03ef14076abb1215f5a3411e6900e44f06dd222397e"}
Jan 27 22:19:51 crc kubenswrapper[4803]: I0127 22:19:51.055616 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-jchsg"]
Jan 27 22:19:51 crc kubenswrapper[4803]: I0127 22:19:51.070892 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4spv4"]
Jan 27 22:19:51 crc kubenswrapper[4803]: I0127 22:19:51.084899 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1ee6-account-create-update-kwcbz"]
Jan 27 22:19:51 crc kubenswrapper[4803]: I0127 22:19:51.096331 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9" event={"ID":"df3f9adb-ad8a-484b-89f7-fb1689886470","Type":"ContainerStarted","Data":"f9a8367258af03e6c28dfc7376d6fed344bc279f8fb0bdb33dd3fe7c6b7df863"}
Jan 27 22:19:51 crc kubenswrapper[4803]: I0127 22:19:51.097167 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-jchsg"]
Jan 27 22:19:51 crc kubenswrapper[4803]: I0127 22:19:51.116007 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1ee6-account-create-update-kwcbz"]
Jan 27 22:19:51 crc kubenswrapper[4803]: I0127 22:19:51.127521 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4spv4"]
Jan 27 22:19:51 crc kubenswrapper[4803]: I0127 22:19:51.138236 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2724-account-create-update-5sfc9"]
Jan 27 22:19:51 crc kubenswrapper[4803]: I0127 22:19:51.150391 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2724-account-create-update-5sfc9"]
Jan 27 22:19:51 crc kubenswrapper[4803]: I0127 22:19:51.158640 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9" podStartSLOduration=1.7093486439999999 podStartE2EDuration="2.158626872s" podCreationTimestamp="2026-01-27 22:19:49 +0000 UTC" firstStartedPulling="2026-01-27 22:19:50.045329527 +0000 UTC m=+1942.461351226" lastFinishedPulling="2026-01-27 22:19:50.494607755 +0000 UTC m=+1942.910629454" observedRunningTime="2026-01-27 22:19:51.110037725 +0000 UTC m=+1943.526059454" watchObservedRunningTime="2026-01-27 22:19:51.158626872 +0000 UTC m=+1943.574648571"
Jan 27 22:19:52 crc kubenswrapper[4803]: I0127 22:19:52.321554 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37db56ec-494a-417b-9435-a06c024bb779" path="/var/lib/kubelet/pods/37db56ec-494a-417b-9435-a06c024bb779/volumes"
Jan 27 22:19:52 crc kubenswrapper[4803]: I0127 22:19:52.324345 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8912c649-5790-40b5-9fae-415ca9dbdc49" path="/var/lib/kubelet/pods/8912c649-5790-40b5-9fae-415ca9dbdc49/volumes"
Jan 27 22:19:52 crc kubenswrapper[4803]: I0127 22:19:52.325704 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a95a82c5-cf45-4dee-9891-d0bd2f0e95b9" path="/var/lib/kubelet/pods/a95a82c5-cf45-4dee-9891-d0bd2f0e95b9/volumes"
Jan 27 22:19:52 crc kubenswrapper[4803]: I0127 22:19:52.327124 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f46bec-6bde-45cd-ad44-fb2399387ad7" path="/var/lib/kubelet/pods/f0f46bec-6bde-45cd-ad44-fb2399387ad7/volumes"
Jan 27 22:19:53 crc kubenswrapper[4803]: I0127 22:19:53.035161 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fd2t7"]
Jan 27 22:19:53 crc kubenswrapper[4803]: I0127 22:19:53.047235 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fd2t7"]
Jan 27 22:19:54 crc kubenswrapper[4803]: I0127 22:19:54.983190 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc3416a2-788e-417e-9f0e-07f4d5b3c180" path="/var/lib/kubelet/pods/cc3416a2-788e-417e-9f0e-07f4d5b3c180/volumes"
Jan 27 22:19:54 crc kubenswrapper[4803]: I0127 22:19:54.985570 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-c39a-account-create-update-k2wlf"]
Jan 27 22:19:54 crc kubenswrapper[4803]: I0127 22:19:54.999199 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-c39a-account-create-update-k2wlf"]
Jan 27 22:19:56 crc kubenswrapper[4803]: I0127 22:19:56.319670 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96cdfdbb-1c49-46a6-b901-147ad561f0e6" path="/var/lib/kubelet/pods/96cdfdbb-1c49-46a6-b901-147ad561f0e6/volumes"
Jan 27 22:19:58 crc kubenswrapper[4803]: I0127 22:19:58.044759 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-qq4dz"]
Jan 27 22:19:58 crc kubenswrapper[4803]: I0127 22:19:58.056337 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-df63-account-create-update-k6xrt"]
Jan 27 22:19:58 crc kubenswrapper[4803]: I0127 22:19:58.067096 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-qq4dz"]
Jan 27 22:19:58 crc kubenswrapper[4803]: I0127 22:19:58.078757 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-df63-account-create-update-k6xrt"]
Jan 27 22:19:58 crc kubenswrapper[4803]: I0127 22:19:58.343232 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ccac8b-df21-432b-8026-dbdd520d088c" path="/var/lib/kubelet/pods/04ccac8b-df21-432b-8026-dbdd520d088c/volumes"
Jan 27 22:19:58 crc kubenswrapper[4803]: I0127 22:19:58.344819 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dba93f2-5c88-4288-938c-42b786852bbf" path="/var/lib/kubelet/pods/6dba93f2-5c88-4288-938c-42b786852bbf/volumes"
Jan 27 22:20:06 crc kubenswrapper[4803]: I0127 22:20:06.031801 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p"]
Jan 27 22:20:06 crc kubenswrapper[4803]: I0127 22:20:06.043149 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-e69e-account-create-update-qnj69"]
Jan 27 22:20:06 crc kubenswrapper[4803]: I0127 22:20:06.059972 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xnn9p"]
Jan 27 22:20:06 crc kubenswrapper[4803]: I0127 22:20:06.070173 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-e69e-account-create-update-qnj69"]
Jan 27 22:20:06 crc kubenswrapper[4803]: I0127 22:20:06.322469 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc3413d-60d0-477c-a252-98ac28898260" path="/var/lib/kubelet/pods/bbc3413d-60d0-477c-a252-98ac28898260/volumes"
Jan 27 22:20:06 crc kubenswrapper[4803]: I0127 22:20:06.323716 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf306541-6ada-4bf3-8a32-a1de57044cf8" path="/var/lib/kubelet/pods/cf306541-6ada-4bf3-8a32-a1de57044cf8/volumes"
Jan 27 22:20:16 crc kubenswrapper[4803]: I0127 22:20:16.343956 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 22:20:16 crc kubenswrapper[4803]: I0127 22:20:16.344550 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 22:20:29 crc kubenswrapper[4803]: I0127 22:20:29.129794 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-qptdp"]
Jan 27 22:20:29 crc kubenswrapper[4803]: I0127 22:20:29.154561 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-qptdp"]
Jan 27 22:20:30 crc kubenswrapper[4803]: I0127 22:20:30.322096 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7065dfd-1cab-471d-9aa5-60cee3714a4e" path="/var/lib/kubelet/pods/a7065dfd-1cab-471d-9aa5-60cee3714a4e/volumes"
Jan 27 22:20:31 crc kubenswrapper[4803]: I0127 22:20:31.872446 4803 scope.go:117] "RemoveContainer" containerID="014982ac8c7718c3705ede520e17600202487e7bf067affd5608ad34427786aa"
Jan 27 22:20:31 crc kubenswrapper[4803]: I0127 22:20:31.926468 4803 scope.go:117] "RemoveContainer" containerID="30050414a3f0db1cbaaf56b6c91afba96cd438cf759e21d4e8f0b753dc6453de"
Jan 27 22:20:31 crc kubenswrapper[4803]: I0127 22:20:31.980975 4803 scope.go:117] "RemoveContainer" containerID="0d923d35d781b4ba319e48a3b2130673ba87c106f175c990a6da1a3757ca74c6"
Jan 27 22:20:32 crc kubenswrapper[4803]: I0127 22:20:32.037831 4803 scope.go:117] "RemoveContainer" containerID="f3a66a1868cce646d06342d1a2fcc9c8ea2806a246dd86c4d11a630830a5a44a"
Jan 27 22:20:32 crc kubenswrapper[4803]: I0127 22:20:32.091623 4803 scope.go:117] "RemoveContainer" containerID="70241e71e7051c5b95d12de46ceb1ab3094a8d6e53f3379393fdc327fc312048"
Jan 27 22:20:32 crc kubenswrapper[4803]: I0127 22:20:32.144268 4803 scope.go:117] "RemoveContainer" containerID="212a6f9d209c2eaf09e83d45bcd7651f3d8e1a78896e0b622f222337d94e0f8c"
Jan 27 22:20:32 crc kubenswrapper[4803]: I0127 22:20:32.195999 4803 scope.go:117] "RemoveContainer" containerID="965a900e8bc48c23f0caaca6f059a51a611ae609cf84a2d72b10eb034185e1da"
Jan 27 22:20:32 crc kubenswrapper[4803]: I0127 22:20:32.226473 4803 scope.go:117] "RemoveContainer" containerID="3fad64164673658069f6e82800df33ee3f7cc8e466159aa90debab92e4f39637"
Jan 27 22:20:32 crc kubenswrapper[4803]: I0127 22:20:32.260077 4803 scope.go:117] "RemoveContainer" containerID="5601a77173e43ec54783427c2ed21a7b96518f169e796d61f2ee7be8be7942db"
Jan 27 22:20:32 crc kubenswrapper[4803]: I0127 22:20:32.287901 4803 scope.go:117] "RemoveContainer" containerID="6a35bc65ea7413b3f9d28be260d0207a58d3d2dd4e4cb380c807128058befc91"
Jan 27 22:20:32 crc kubenswrapper[4803]: I0127 22:20:32.322778 4803 scope.go:117] "RemoveContainer" containerID="332032bd16283da917921625227e9d0ed3485ca7145ba1a5dd0cf2995db3ced4"
Jan 27 22:20:32 crc kubenswrapper[4803]: I0127 22:20:32.345353 4803 scope.go:117] "RemoveContainer" containerID="8c5bdc3a8dfeb255a2227f7412f17ef0966bebf8f441e6a50e045c2c990b2ae0"
Jan 27 22:20:34 crc kubenswrapper[4803]: I0127 22:20:34.032342 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-297r9"]
Jan 27 22:20:34 crc kubenswrapper[4803]: I0127 22:20:34.043860 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-297r9"]
Jan 27 22:20:34 crc kubenswrapper[4803]: I0127 22:20:34.323349 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76260834-5c9e-485d-bbe9-71f319b5a9a6" path="/var/lib/kubelet/pods/76260834-5c9e-485d-bbe9-71f319b5a9a6/volumes"
Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.054961 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-335a-account-create-update-bflvb"]
Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.075649 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-91fa-account-create-update-2n55g"]
Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.088301 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bgszm"]
Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.098690 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-91fa-account-create-update-2n55g"]
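Every kubenswrapper message above carries a klog header: severity letter, mmdd date, wall-clock time, PID, and source file:line. A self-contained Go sketch that splits one of the RemoveContainer lines into those fields; the regexp and field names here are mine, not part of klog itself:

package main

import (
	"fmt"
	"regexp"
)

// klog header: severity (I/W/E/F), mmdd, wall-clock time, PID, file:line].
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `I0127 22:20:31.872446 4803 scope.go:117] "RemoveContainer" containerID="014982ac8c7718c3705ede520e17600202487e7bf067affd5608ad34427786aa"`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}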
pods=["openstack/heat-03da-account-create-update-pk298"] Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.119098 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-335a-account-create-update-bflvb"] Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.130633 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-bxbff"] Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.141452 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-03da-account-create-update-pk298"] Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.152389 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bgszm"] Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.165730 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-82klm"] Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.177062 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-bxbff"] Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.187109 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-krcg6"] Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.196566 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-82klm"] Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.206011 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0efc-account-create-update-6nv5m"] Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.215666 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-krcg6"] Jan 27 22:20:41 crc kubenswrapper[4803]: I0127 22:20:41.227033 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0efc-account-create-update-6nv5m"] Jan 27 22:20:42 crc kubenswrapper[4803]: I0127 22:20:42.321535 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065111e5-7fbf-4d19-b5b6-73fab236781b" path="/var/lib/kubelet/pods/065111e5-7fbf-4d19-b5b6-73fab236781b/volumes" Jan 27 22:20:42 crc kubenswrapper[4803]: I0127 22:20:42.322956 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a1aea96-4bc5-4809-bd77-3d7b319f274a" path="/var/lib/kubelet/pods/0a1aea96-4bc5-4809-bd77-3d7b319f274a/volumes" Jan 27 22:20:42 crc kubenswrapper[4803]: I0127 22:20:42.327134 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0706a6-8dac-4c9e-8d69-04e89e9e0c33" path="/var/lib/kubelet/pods/0c0706a6-8dac-4c9e-8d69-04e89e9e0c33/volumes" Jan 27 22:20:42 crc kubenswrapper[4803]: I0127 22:20:42.329478 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124a1c8a-df45-4295-92cc-cb1708dcd2dc" path="/var/lib/kubelet/pods/124a1c8a-df45-4295-92cc-cb1708dcd2dc/volumes" Jan 27 22:20:42 crc kubenswrapper[4803]: I0127 22:20:42.331320 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2844562f-7d2e-435f-9bf1-58fe118e3345" path="/var/lib/kubelet/pods/2844562f-7d2e-435f-9bf1-58fe118e3345/volumes" Jan 27 22:20:42 crc kubenswrapper[4803]: I0127 22:20:42.333414 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a244c95f-624e-4dca-833a-f290dd3c4465" path="/var/lib/kubelet/pods/a244c95f-624e-4dca-833a-f290dd3c4465/volumes" Jan 27 22:20:42 crc kubenswrapper[4803]: I0127 22:20:42.335816 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f60e00-645d-465c-a973-55f9c9a1f2c1" 
path="/var/lib/kubelet/pods/c8f60e00-645d-465c-a973-55f9c9a1f2c1/volumes" Jan 27 22:20:42 crc kubenswrapper[4803]: I0127 22:20:42.338108 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0bd3943-fd52-4f19-8d60-b3e7446de42e" path="/var/lib/kubelet/pods/d0bd3943-fd52-4f19-8d60-b3e7446de42e/volumes" Jan 27 22:20:46 crc kubenswrapper[4803]: I0127 22:20:46.343660 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:20:46 crc kubenswrapper[4803]: I0127 22:20:46.344259 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:20:47 crc kubenswrapper[4803]: I0127 22:20:47.030965 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-75867"] Jan 27 22:20:47 crc kubenswrapper[4803]: I0127 22:20:47.041583 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-75867"] Jan 27 22:20:48 crc kubenswrapper[4803]: I0127 22:20:48.324113 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c533a2e0-2bd7-4ffd-8954-f83b562aa811" path="/var/lib/kubelet/pods/c533a2e0-2bd7-4ffd-8954-f83b562aa811/volumes" Jan 27 22:21:15 crc kubenswrapper[4803]: I0127 22:21:15.039942 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jdcs2"] Jan 27 22:21:15 crc kubenswrapper[4803]: I0127 22:21:15.050876 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jdcs2"] Jan 27 22:21:16 crc kubenswrapper[4803]: I0127 22:21:16.322264 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cdc662a-87eb-4af4-916f-fe3746b4a1f0" path="/var/lib/kubelet/pods/8cdc662a-87eb-4af4-916f-fe3746b4a1f0/volumes" Jan 27 22:21:16 crc kubenswrapper[4803]: I0127 22:21:16.343494 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:21:16 crc kubenswrapper[4803]: I0127 22:21:16.343560 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:21:16 crc kubenswrapper[4803]: I0127 22:21:16.343615 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 22:21:16 crc kubenswrapper[4803]: I0127 22:21:16.344673 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4cb856bad298c87c22d13858541beda57c61d6cafc0180491d51e1bced258716"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 27 22:21:16 crc kubenswrapper[4803]: I0127 22:21:16.344747 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://4cb856bad298c87c22d13858541beda57c61d6cafc0180491d51e1bced258716" gracePeriod=600 Jan 27 22:21:16 crc kubenswrapper[4803]: I0127 22:21:16.537587 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="4cb856bad298c87c22d13858541beda57c61d6cafc0180491d51e1bced258716" exitCode=0 Jan 27 22:21:16 crc kubenswrapper[4803]: I0127 22:21:16.537628 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"4cb856bad298c87c22d13858541beda57c61d6cafc0180491d51e1bced258716"} Jan 27 22:21:16 crc kubenswrapper[4803]: I0127 22:21:16.537661 4803 scope.go:117] "RemoveContainer" containerID="f4fa0bf690b097b3063d75be9a1a96196ae3826fe277d91601537f347cafc99c" Jan 27 22:21:17 crc kubenswrapper[4803]: I0127 22:21:17.549353 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff"} Jan 27 22:21:28 crc kubenswrapper[4803]: I0127 22:21:28.045118 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ngppz"] Jan 27 22:21:28 crc kubenswrapper[4803]: I0127 22:21:28.057625 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-df2jx"] Jan 27 22:21:28 crc kubenswrapper[4803]: I0127 22:21:28.069340 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-df2jx"] Jan 27 22:21:28 crc kubenswrapper[4803]: I0127 22:21:28.079697 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ngppz"] Jan 27 22:21:28 crc kubenswrapper[4803]: I0127 22:21:28.320054 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca" path="/var/lib/kubelet/pods/17e8a3ad-ebf5-4b45-beed-b3c7e2e083ca/volumes" Jan 27 22:21:28 crc kubenswrapper[4803]: I0127 22:21:28.324143 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1309b4e-8ae9-4e41-ba61-1003d755c889" path="/var/lib/kubelet/pods/c1309b4e-8ae9-4e41-ba61-1003d755c889/volumes" Jan 27 22:21:32 crc kubenswrapper[4803]: I0127 22:21:32.609905 4803 scope.go:117] "RemoveContainer" containerID="05c3391be8c429a9c693ca6b537e30cf8487c781c96c5aaa83d4317cc3b9a20b" Jan 27 22:21:32 crc kubenswrapper[4803]: I0127 22:21:32.639637 4803 scope.go:117] "RemoveContainer" containerID="9649a0ce2c19fe320f87eb38a9da445df5c05d990fc66e06f2f9aa49d45ae697" Jan 27 22:21:32 crc kubenswrapper[4803]: I0127 22:21:32.718076 4803 scope.go:117] "RemoveContainer" containerID="3b1d08e519ce4b18d9bf381e6539c6371c0c131b60eea2e31377809998d47349" Jan 27 22:21:32 crc kubenswrapper[4803]: I0127 22:21:32.777452 4803 scope.go:117] "RemoveContainer" containerID="39c5bc2f05c3aecd54d32f1cb4b7f6a52a0d5714aab8d01a0b46c06c6cb05655" Jan 27 22:21:32 crc kubenswrapper[4803]: I0127 22:21:32.851152 4803 scope.go:117] "RemoveContainer" 
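The restart sequence above (connection-refused probe failures at 22:20:16, 22:20:46, and 22:21:16, then "Killing container with a grace period", ContainerDied, ContainerStarted) is what an HTTP liveness probe against 127.0.0.1:8798/health produces when the endpoint stops answering. Below is a sketch of a probe spec consistent with those messages, using k8s.io/api types; only Host, Port, and Path appear in the log, while PeriodSeconds and FailureThreshold are inferred from the 30-second failure spacing and should be treated as assumptions.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Host, Port, and Path are taken from the probe output above; the failures
// arrive 30s apart and the container is killed after the third, hence the
// guessed PeriodSeconds and FailureThreshold. The gracePeriod=600 in the
// kill message corresponds to the pod's termination grace period.
var livenessProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Host: "127.0.0.1",
			Port: intstr.FromInt(8798),
			Path: "/health",
		},
	},
	PeriodSeconds:    30,
	FailureThreshold: 3,
}

func main() { _ = livenessProbe }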
containerID="c967edab7ae778868b0e850c719e6439de65f198089c3b57bd1bb3ad1fa68104" Jan 27 22:21:32 crc kubenswrapper[4803]: I0127 22:21:32.904053 4803 scope.go:117] "RemoveContainer" containerID="bc857b20d5a76fdfc21b04e16adf7eea6a40acc2c67047ecf75d9c7c06f953b6" Jan 27 22:21:32 crc kubenswrapper[4803]: I0127 22:21:32.975179 4803 scope.go:117] "RemoveContainer" containerID="12f500c3e88e10aa4f316d8bf4bc3541d87c21cdd5c55a8c87ddf0058e3b718b" Jan 27 22:21:33 crc kubenswrapper[4803]: I0127 22:21:33.004667 4803 scope.go:117] "RemoveContainer" containerID="c2b63d960a165198b02cec6dde060f46f932f9173edc2c7727d09aa301b723b1" Jan 27 22:21:33 crc kubenswrapper[4803]: I0127 22:21:33.026688 4803 scope.go:117] "RemoveContainer" containerID="f15531efad6f152f886a431d94056653f2ba603c0b35d376bb8d362002999af5" Jan 27 22:21:33 crc kubenswrapper[4803]: I0127 22:21:33.049576 4803 scope.go:117] "RemoveContainer" containerID="b6d1d8d4c02b4138192c50cd2594d3a87dbb9c73d84442af56fdca4a434b077e" Jan 27 22:21:33 crc kubenswrapper[4803]: I0127 22:21:33.071346 4803 scope.go:117] "RemoveContainer" containerID="df626e6c49acc2230001cea15abc0c70175171ca9ef46cb26823caa839335564" Jan 27 22:21:33 crc kubenswrapper[4803]: I0127 22:21:33.097215 4803 scope.go:117] "RemoveContainer" containerID="4d1ae031861d9850e3c39dddf8dcb91f3d5f2fed1da597c957ebe0614d2fc552" Jan 27 22:21:33 crc kubenswrapper[4803]: I0127 22:21:33.124250 4803 scope.go:117] "RemoveContainer" containerID="61a403642726c8c1f6d042150bd4c6470628ebb1644ab30991d1922fb1182142" Jan 27 22:21:37 crc kubenswrapper[4803]: I0127 22:21:37.043262 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-vntfr"] Jan 27 22:21:37 crc kubenswrapper[4803]: I0127 22:21:37.085077 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-vntfr"] Jan 27 22:21:38 crc kubenswrapper[4803]: I0127 22:21:38.321005 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3469063f-f2e9-46a9-bc44-bb35cf4b2149" path="/var/lib/kubelet/pods/3469063f-f2e9-46a9-bc44-bb35cf4b2149/volumes" Jan 27 22:21:40 crc kubenswrapper[4803]: I0127 22:21:40.038067 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-lsh9s"] Jan 27 22:21:40 crc kubenswrapper[4803]: I0127 22:21:40.051475 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-lsh9s"] Jan 27 22:21:40 crc kubenswrapper[4803]: I0127 22:21:40.329909 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39e2273-cd2c-4e27-9890-39cf781c7508" path="/var/lib/kubelet/pods/d39e2273-cd2c-4e27-9890-39cf781c7508/volumes" Jan 27 22:21:40 crc kubenswrapper[4803]: I0127 22:21:40.816956 4803 generic.go:334] "Generic (PLEG): container finished" podID="df3f9adb-ad8a-484b-89f7-fb1689886470" containerID="f9a8367258af03e6c28dfc7376d6fed344bc279f8fb0bdb33dd3fe7c6b7df863" exitCode=0 Jan 27 22:21:40 crc kubenswrapper[4803]: I0127 22:21:40.817006 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9" event={"ID":"df3f9adb-ad8a-484b-89f7-fb1689886470","Type":"ContainerDied","Data":"f9a8367258af03e6c28dfc7376d6fed344bc279f8fb0bdb33dd3fe7c6b7df863"} Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.339752 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.460443 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-inventory\") pod \"df3f9adb-ad8a-484b-89f7-fb1689886470\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.460624 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fk9j\" (UniqueName: \"kubernetes.io/projected/df3f9adb-ad8a-484b-89f7-fb1689886470-kube-api-access-9fk9j\") pod \"df3f9adb-ad8a-484b-89f7-fb1689886470\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.460672 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-ssh-key-openstack-edpm-ipam\") pod \"df3f9adb-ad8a-484b-89f7-fb1689886470\" (UID: \"df3f9adb-ad8a-484b-89f7-fb1689886470\") " Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.466458 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3f9adb-ad8a-484b-89f7-fb1689886470-kube-api-access-9fk9j" (OuterVolumeSpecName: "kube-api-access-9fk9j") pod "df3f9adb-ad8a-484b-89f7-fb1689886470" (UID: "df3f9adb-ad8a-484b-89f7-fb1689886470"). InnerVolumeSpecName "kube-api-access-9fk9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.490956 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "df3f9adb-ad8a-484b-89f7-fb1689886470" (UID: "df3f9adb-ad8a-484b-89f7-fb1689886470"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.502474 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-inventory" (OuterVolumeSpecName: "inventory") pod "df3f9adb-ad8a-484b-89f7-fb1689886470" (UID: "df3f9adb-ad8a-484b-89f7-fb1689886470"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.564286 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.564323 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fk9j\" (UniqueName: \"kubernetes.io/projected/df3f9adb-ad8a-484b-89f7-fb1689886470-kube-api-access-9fk9j\") on node \"crc\" DevicePath \"\"" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.564335 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df3f9adb-ad8a-484b-89f7-fb1689886470-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.848705 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9" event={"ID":"df3f9adb-ad8a-484b-89f7-fb1689886470","Type":"ContainerDied","Data":"acba8732204f9975b46ad03ef14076abb1215f5a3411e6900e44f06dd222397e"} Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.848781 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acba8732204f9975b46ad03ef14076abb1215f5a3411e6900e44f06dd222397e" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.848775 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fppg9" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.944041 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"] Jan 27 22:21:42 crc kubenswrapper[4803]: E0127 22:21:42.944570 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3f9adb-ad8a-484b-89f7-fb1689886470" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.944588 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3f9adb-ad8a-484b-89f7-fb1689886470" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.944796 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="df3f9adb-ad8a-484b-89f7-fb1689886470" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.945666 4803 util.go:30] "No sandbox for pod can be found. 
Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.945666 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"
Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.951909 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.951982 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z"
Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.951835 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.956307 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 22:21:42 crc kubenswrapper[4803]: I0127 22:21:42.968046 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"]
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.093032 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.093190 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tdcd\" (UniqueName: \"kubernetes.io/projected/d08ec8ee-bdca-4f63-b951-abfbe94d188e-kube-api-access-4tdcd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.093428 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.195574 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tdcd\" (UniqueName: \"kubernetes.io/projected/d08ec8ee-bdca-4f63-b951-abfbe94d188e-kube-api-access-4tdcd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.195702 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.195802 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.201359 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.201526 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.218423 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tdcd\" (UniqueName: \"kubernetes.io/projected/d08ec8ee-bdca-4f63-b951-abfbe94d188e-kube-api-access-4tdcd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.300528 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.828182 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb"]
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.831830 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 22:21:43 crc kubenswrapper[4803]: I0127 22:21:43.861209 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb" event={"ID":"d08ec8ee-bdca-4f63-b951-abfbe94d188e","Type":"ContainerStarted","Data":"bc33237860f91e02bfecf21c3943d9c8433981edef4aaaf45e3173c6e509010f"}
Jan 27 22:21:44 crc kubenswrapper[4803]: I0127 22:21:44.902936 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb" event={"ID":"d08ec8ee-bdca-4f63-b951-abfbe94d188e","Type":"ContainerStarted","Data":"73362a25a62f597838ae3f8a62c0bbc7991a263441b088414b63aa52610835c5"}
Jan 27 22:21:44 crc kubenswrapper[4803]: I0127 22:21:44.932768 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb" podStartSLOduration=2.547552709 podStartE2EDuration="2.932749033s" podCreationTimestamp="2026-01-27 22:21:42 +0000 UTC" firstStartedPulling="2026-01-27 22:21:43.831545073 +0000 UTC m=+2056.247566772" lastFinishedPulling="2026-01-27 22:21:44.216741397 +0000 UTC m=+2056.632763096" observedRunningTime="2026-01-27 22:21:44.924215513 +0000 UTC m=+2057.340237222" watchObservedRunningTime="2026-01-27 22:21:44.932749033 +0000 UTC m=+2057.348770732"
Jan 27 22:22:19 crc kubenswrapper[4803]: I0127 22:22:19.066444 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-n86x7"]
Jan 27 22:22:19 crc kubenswrapper[4803]: I0127 22:22:19.081835 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-29d72"]
Jan 27 22:22:19 crc kubenswrapper[4803]: I0127 22:22:19.095514 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-821d-account-create-update-6bmpn"]
Jan 27 22:22:19 crc kubenswrapper[4803]: I0127 22:22:19.105884 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-29d72"]
Jan 27 22:22:19 crc kubenswrapper[4803]: I0127 22:22:19.116219 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-n86x7"]
Jan 27 22:22:19 crc kubenswrapper[4803]: I0127 22:22:19.126124 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-821d-account-create-update-6bmpn"]
Jan 27 22:22:20 crc kubenswrapper[4803]: I0127 22:22:20.039440 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ffj48"]
Jan 27 22:22:20 crc kubenswrapper[4803]: I0127 22:22:20.056605 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9b97-account-create-update-5fk5w"]
Jan 27 22:22:20 crc kubenswrapper[4803]: I0127 22:22:20.070022 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ffb8-account-create-update-wvcqj"]
Jan 27 22:22:20 crc kubenswrapper[4803]: I0127 22:22:20.080056 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9b97-account-create-update-5fk5w"]
Jan 27 22:22:20 crc kubenswrapper[4803]: I0127 22:22:20.089857 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ffj48"]
Jan 27 22:22:20 crc kubenswrapper[4803]: I0127 22:22:20.100734 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ffb8-account-create-update-wvcqj"]
Jan 27 22:22:20 crc kubenswrapper[4803]: I0127 22:22:20.334731 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26f931c2-83c8-4d1a-88ff-4483d4aba42d" path="/var/lib/kubelet/pods/26f931c2-83c8-4d1a-88ff-4483d4aba42d/volumes"
Jan 27 22:22:20 crc kubenswrapper[4803]: I0127 22:22:20.336600 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="488bf67e-5edf-45f8-8ac9-a12e75646525" path="/var/lib/kubelet/pods/488bf67e-5edf-45f8-8ac9-a12e75646525/volumes"
Jan 27 22:22:20 crc kubenswrapper[4803]: I0127 22:22:20.337886 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be52911-e65b-41f4-b207-efc49bc308d9" path="/var/lib/kubelet/pods/4be52911-e65b-41f4-b207-efc49bc308d9/volumes"
Jan 27 22:22:20 crc kubenswrapper[4803]: I0127 22:22:20.339388 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5ddc4c-65f5-4b87-b30c-6c63031f8826" path="/var/lib/kubelet/pods/4c5ddc4c-65f5-4b87-b30c-6c63031f8826/volumes"
Jan 27 22:22:20 crc kubenswrapper[4803]: I0127 22:22:20.341598 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4f1dd8-79ee-4832-9474-cabab5bc72e8" path="/var/lib/kubelet/pods/7e4f1dd8-79ee-4832-9474-cabab5bc72e8/volumes"
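Unlike the rabbitmq entry, the configure-network startup-latency line above carries real pull timestamps, and podStartSLOduration is exactly podStartE2EDuration minus the image-pull window (44.216741397 - 43.831545073 seconds past 22:21), consistent with the SLO metric excluding pull time. A one-line Go check:

package main

import "fmt"

func main() {
	e2e := 2.932749033                  // podStartE2EDuration from the log
	pull := 44.216741397 - 43.831545073 // lastFinishedPulling - firstStartedPulling
	fmt.Println(e2e - pull)             // ~2.547552709 == podStartSLOduration
}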
path="/var/lib/kubelet/pods/a043d332-9921-4219-9ad6-12e0cb2e31b9/volumes" Jan 27 22:22:33 crc kubenswrapper[4803]: I0127 22:22:33.400873 4803 scope.go:117] "RemoveContainer" containerID="4cd409d15e23e7d7bf89c7bbb9726050a0cb13b99b70202d067dd35e9ca78630" Jan 27 22:22:33 crc kubenswrapper[4803]: I0127 22:22:33.435196 4803 scope.go:117] "RemoveContainer" containerID="3764bf033604f05d05e795ba89541aa1c4a3e0511424a0ed2f0011a122658ee1" Jan 27 22:22:33 crc kubenswrapper[4803]: I0127 22:22:33.522147 4803 scope.go:117] "RemoveContainer" containerID="c96c9e346836fe66124e1aa99deb802706dfb5b2e575bd2038955c360ff6ef4d" Jan 27 22:22:33 crc kubenswrapper[4803]: I0127 22:22:33.574306 4803 scope.go:117] "RemoveContainer" containerID="84c4c07e807f36fd4f9f69d4873a4376b7bb087eb66e27816dc1655b78159a91" Jan 27 22:22:33 crc kubenswrapper[4803]: I0127 22:22:33.640645 4803 scope.go:117] "RemoveContainer" containerID="5dd55ed1ac295f9ed4bba4166a56f138e235ff67a9faa616fbfc4e7b7718ada8" Jan 27 22:22:33 crc kubenswrapper[4803]: I0127 22:22:33.708109 4803 scope.go:117] "RemoveContainer" containerID="10a25cab970b58592891ef09277c813d9a9b8ecdf4c787ce9427938b7a8ff554" Jan 27 22:22:33 crc kubenswrapper[4803]: I0127 22:22:33.762746 4803 scope.go:117] "RemoveContainer" containerID="0f839680e2da0b2cd924d03df01469cc0586bc4689e38d9940096323678432d5" Jan 27 22:22:33 crc kubenswrapper[4803]: I0127 22:22:33.784450 4803 scope.go:117] "RemoveContainer" containerID="955ce66cce97d49691696950c099fea1511cd1d7cff02e7af258673e8b90eccc" Jan 27 22:22:52 crc kubenswrapper[4803]: I0127 22:22:52.667180 4803 generic.go:334] "Generic (PLEG): container finished" podID="d08ec8ee-bdca-4f63-b951-abfbe94d188e" containerID="73362a25a62f597838ae3f8a62c0bbc7991a263441b088414b63aa52610835c5" exitCode=0 Jan 27 22:22:52 crc kubenswrapper[4803]: I0127 22:22:52.667264 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb" event={"ID":"d08ec8ee-bdca-4f63-b951-abfbe94d188e","Type":"ContainerDied","Data":"73362a25a62f597838ae3f8a62c0bbc7991a263441b088414b63aa52610835c5"} Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.470455 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.551808 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tdcd\" (UniqueName: \"kubernetes.io/projected/d08ec8ee-bdca-4f63-b951-abfbe94d188e-kube-api-access-4tdcd\") pod \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.552044 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-ssh-key-openstack-edpm-ipam\") pod \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.552104 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-inventory\") pod \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\" (UID: \"d08ec8ee-bdca-4f63-b951-abfbe94d188e\") " Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.571126 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d08ec8ee-bdca-4f63-b951-abfbe94d188e-kube-api-access-4tdcd" (OuterVolumeSpecName: "kube-api-access-4tdcd") pod "d08ec8ee-bdca-4f63-b951-abfbe94d188e" (UID: "d08ec8ee-bdca-4f63-b951-abfbe94d188e"). InnerVolumeSpecName "kube-api-access-4tdcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.589668 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d08ec8ee-bdca-4f63-b951-abfbe94d188e" (UID: "d08ec8ee-bdca-4f63-b951-abfbe94d188e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.591295 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-inventory" (OuterVolumeSpecName: "inventory") pod "d08ec8ee-bdca-4f63-b951-abfbe94d188e" (UID: "d08ec8ee-bdca-4f63-b951-abfbe94d188e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.655425 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tdcd\" (UniqueName: \"kubernetes.io/projected/d08ec8ee-bdca-4f63-b951-abfbe94d188e-kube-api-access-4tdcd\") on node \"crc\" DevicePath \"\"" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.655463 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.655474 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d08ec8ee-bdca-4f63-b951-abfbe94d188e-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.687137 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb" event={"ID":"d08ec8ee-bdca-4f63-b951-abfbe94d188e","Type":"ContainerDied","Data":"bc33237860f91e02bfecf21c3943d9c8433981edef4aaaf45e3173c6e509010f"} Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.687181 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc33237860f91e02bfecf21c3943d9c8433981edef4aaaf45e3173c6e509010f" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.687214 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.789673 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk"] Jan 27 22:22:54 crc kubenswrapper[4803]: E0127 22:22:54.790363 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08ec8ee-bdca-4f63-b951-abfbe94d188e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.790387 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08ec8ee-bdca-4f63-b951-abfbe94d188e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.790705 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d08ec8ee-bdca-4f63-b951-abfbe94d188e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.791761 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.797118 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.797186 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.797376 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.797316 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.808414 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk"] Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.859693 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p7fh\" (UniqueName: \"kubernetes.io/projected/54328a1c-1655-4d76-9301-a0f71cc5c59d-kube-api-access-9p7fh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.860212 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.860428 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.962907 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.963077 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p7fh\" (UniqueName: \"kubernetes.io/projected/54328a1c-1655-4d76-9301-a0f71cc5c59d-kube-api-access-9p7fh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.963146 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.966384 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.973323 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:22:54 crc kubenswrapper[4803]: I0127 22:22:54.977966 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p7fh\" (UniqueName: \"kubernetes.io/projected/54328a1c-1655-4d76-9301-a0f71cc5c59d-kube-api-access-9p7fh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:22:55 crc kubenswrapper[4803]: I0127 22:22:55.040473 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-klms9"] Jan 27 22:22:55 crc kubenswrapper[4803]: I0127 22:22:55.051349 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-klms9"] Jan 27 22:22:55 crc kubenswrapper[4803]: I0127 22:22:55.124265 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:22:55 crc kubenswrapper[4803]: I0127 22:22:55.662518 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk"] Jan 27 22:22:55 crc kubenswrapper[4803]: W0127 22:22:55.667026 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54328a1c_1655_4d76_9301_a0f71cc5c59d.slice/crio-db27b53be33f4278f485e03c400081ba75c04364c53950aa9154b44954762557 WatchSource:0}: Error finding container db27b53be33f4278f485e03c400081ba75c04364c53950aa9154b44954762557: Status 404 returned error can't find the container with id db27b53be33f4278f485e03c400081ba75c04364c53950aa9154b44954762557 Jan 27 22:22:55 crc kubenswrapper[4803]: I0127 22:22:55.711576 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" event={"ID":"54328a1c-1655-4d76-9301-a0f71cc5c59d","Type":"ContainerStarted","Data":"db27b53be33f4278f485e03c400081ba75c04364c53950aa9154b44954762557"} Jan 27 22:22:56 crc kubenswrapper[4803]: I0127 22:22:56.321837 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00ff690-b44a-4a6e-9bf3-560344feda39" path="/var/lib/kubelet/pods/a00ff690-b44a-4a6e-9bf3-560344feda39/volumes" Jan 27 22:22:56 crc kubenswrapper[4803]: I0127 22:22:56.722951 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" event={"ID":"54328a1c-1655-4d76-9301-a0f71cc5c59d","Type":"ContainerStarted","Data":"e3c94ed897aa522f04553d1e75be0afb86999dfdf4fcfff86279aee420cbe97a"} Jan 27 22:22:56 crc kubenswrapper[4803]: I0127 22:22:56.748738 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" podStartSLOduration=2.283562845 podStartE2EDuration="2.748718441s" podCreationTimestamp="2026-01-27 22:22:54 +0000 UTC" firstStartedPulling="2026-01-27 22:22:55.669575835 +0000 UTC m=+2128.085597534" lastFinishedPulling="2026-01-27 22:22:56.134731391 +0000 UTC m=+2128.550753130" observedRunningTime="2026-01-27 22:22:56.743040408 +0000 UTC m=+2129.159062127" watchObservedRunningTime="2026-01-27 22:22:56.748718441 +0000 UTC m=+2129.164740140" Jan 27 22:23:01 crc kubenswrapper[4803]: I0127 22:23:01.793175 4803 generic.go:334] "Generic (PLEG): container finished" podID="54328a1c-1655-4d76-9301-a0f71cc5c59d" containerID="e3c94ed897aa522f04553d1e75be0afb86999dfdf4fcfff86279aee420cbe97a" exitCode=0 Jan 27 22:23:01 crc kubenswrapper[4803]: I0127 22:23:01.793273 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" event={"ID":"54328a1c-1655-4d76-9301-a0f71cc5c59d","Type":"ContainerDied","Data":"e3c94ed897aa522f04553d1e75be0afb86999dfdf4fcfff86279aee420cbe97a"} Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.386291 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.480096 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p7fh\" (UniqueName: \"kubernetes.io/projected/54328a1c-1655-4d76-9301-a0f71cc5c59d-kube-api-access-9p7fh\") pod \"54328a1c-1655-4d76-9301-a0f71cc5c59d\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.480211 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-inventory\") pod \"54328a1c-1655-4d76-9301-a0f71cc5c59d\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.480377 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-ssh-key-openstack-edpm-ipam\") pod \"54328a1c-1655-4d76-9301-a0f71cc5c59d\" (UID: \"54328a1c-1655-4d76-9301-a0f71cc5c59d\") " Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.492643 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54328a1c-1655-4d76-9301-a0f71cc5c59d-kube-api-access-9p7fh" (OuterVolumeSpecName: "kube-api-access-9p7fh") pod "54328a1c-1655-4d76-9301-a0f71cc5c59d" (UID: "54328a1c-1655-4d76-9301-a0f71cc5c59d"). InnerVolumeSpecName "kube-api-access-9p7fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.521423 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-inventory" (OuterVolumeSpecName: "inventory") pod "54328a1c-1655-4d76-9301-a0f71cc5c59d" (UID: "54328a1c-1655-4d76-9301-a0f71cc5c59d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.524326 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "54328a1c-1655-4d76-9301-a0f71cc5c59d" (UID: "54328a1c-1655-4d76-9301-a0f71cc5c59d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.582783 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.582814 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p7fh\" (UniqueName: \"kubernetes.io/projected/54328a1c-1655-4d76-9301-a0f71cc5c59d-kube-api-access-9p7fh\") on node \"crc\" DevicePath \"\"" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.582824 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54328a1c-1655-4d76-9301-a0f71cc5c59d-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.820165 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" event={"ID":"54328a1c-1655-4d76-9301-a0f71cc5c59d","Type":"ContainerDied","Data":"db27b53be33f4278f485e03c400081ba75c04364c53950aa9154b44954762557"} Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.820206 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db27b53be33f4278f485e03c400081ba75c04364c53950aa9154b44954762557" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.820207 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.898511 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b"] Jan 27 22:23:03 crc kubenswrapper[4803]: E0127 22:23:03.899047 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54328a1c-1655-4d76-9301-a0f71cc5c59d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.899065 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="54328a1c-1655-4d76-9301-a0f71cc5c59d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.899344 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="54328a1c-1655-4d76-9301-a0f71cc5c59d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.900263 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.914029 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b"] Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.914509 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.914828 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.914957 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.914978 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.991906 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7fd9b\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.991962 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7fd9b\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:03 crc kubenswrapper[4803]: I0127 22:23:03.992479 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzvqz\" (UniqueName: \"kubernetes.io/projected/321c0a06-cd6e-491b-a376-526a87eb7392-kube-api-access-wzvqz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7fd9b\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:04 crc kubenswrapper[4803]: I0127 22:23:04.095535 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzvqz\" (UniqueName: \"kubernetes.io/projected/321c0a06-cd6e-491b-a376-526a87eb7392-kube-api-access-wzvqz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7fd9b\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:04 crc kubenswrapper[4803]: I0127 22:23:04.095654 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7fd9b\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:04 crc kubenswrapper[4803]: I0127 22:23:04.095688 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-7fd9b\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:04 crc kubenswrapper[4803]: I0127 22:23:04.099415 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7fd9b\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:04 crc kubenswrapper[4803]: I0127 22:23:04.107878 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7fd9b\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:04 crc kubenswrapper[4803]: I0127 22:23:04.131472 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzvqz\" (UniqueName: \"kubernetes.io/projected/321c0a06-cd6e-491b-a376-526a87eb7392-kube-api-access-wzvqz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7fd9b\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:04 crc kubenswrapper[4803]: I0127 22:23:04.234388 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:04 crc kubenswrapper[4803]: I0127 22:23:04.793083 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b"] Jan 27 22:23:04 crc kubenswrapper[4803]: I0127 22:23:04.833610 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" event={"ID":"321c0a06-cd6e-491b-a376-526a87eb7392","Type":"ContainerStarted","Data":"ec8d2bd37e9477cc1a528da4fc5de7f32ad900268b9e13d35813893b59b9bd8a"} Jan 27 22:23:05 crc kubenswrapper[4803]: I0127 22:23:05.853789 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" event={"ID":"321c0a06-cd6e-491b-a376-526a87eb7392","Type":"ContainerStarted","Data":"9b43d9e5af0f18675c13ebe840dae8f5fb47ac99d928ab4f7897b22496bc8d46"} Jan 27 22:23:05 crc kubenswrapper[4803]: I0127 22:23:05.880667 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" podStartSLOduration=2.288204408 podStartE2EDuration="2.880646532s" podCreationTimestamp="2026-01-27 22:23:03 +0000 UTC" firstStartedPulling="2026-01-27 22:23:04.807480511 +0000 UTC m=+2137.223502210" lastFinishedPulling="2026-01-27 22:23:05.399922635 +0000 UTC m=+2137.815944334" observedRunningTime="2026-01-27 22:23:05.869720088 +0000 UTC m=+2138.285741787" watchObservedRunningTime="2026-01-27 22:23:05.880646532 +0000 UTC m=+2138.296668231" Jan 27 22:23:16 crc kubenswrapper[4803]: I0127 22:23:16.344316 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 27 22:23:16 crc kubenswrapper[4803]: I0127 22:23:16.345110 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:23:22 crc kubenswrapper[4803]: I0127 22:23:22.058960 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-w8rk6"] Jan 27 22:23:22 crc kubenswrapper[4803]: I0127 22:23:22.073748 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-w8rk6"] Jan 27 22:23:22 crc kubenswrapper[4803]: I0127 22:23:22.320065 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a4597f-3096-45fc-9383-7f891d163110" path="/var/lib/kubelet/pods/45a4597f-3096-45fc-9383-7f891d163110/volumes" Jan 27 22:23:23 crc kubenswrapper[4803]: I0127 22:23:23.045444 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0eea-account-create-update-czw5l"] Jan 27 22:23:23 crc kubenswrapper[4803]: I0127 22:23:23.063342 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-trf25"] Jan 27 22:23:23 crc kubenswrapper[4803]: I0127 22:23:23.075395 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zgfkn"] Jan 27 22:23:23 crc kubenswrapper[4803]: I0127 22:23:23.090051 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-trf25"] Jan 27 22:23:23 crc kubenswrapper[4803]: I0127 22:23:23.104684 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0eea-account-create-update-czw5l"] Jan 27 22:23:23 crc kubenswrapper[4803]: I0127 22:23:23.119783 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zgfkn"] Jan 27 22:23:24 crc kubenswrapper[4803]: I0127 22:23:24.323015 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21340627-fe1d-49aa-898e-d11730736b41" path="/var/lib/kubelet/pods/21340627-fe1d-49aa-898e-d11730736b41/volumes" Jan 27 22:23:24 crc kubenswrapper[4803]: I0127 22:23:24.324155 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50cb0429-fb71-444b-8fcd-d78847af272a" path="/var/lib/kubelet/pods/50cb0429-fb71-444b-8fcd-d78847af272a/volumes" Jan 27 22:23:24 crc kubenswrapper[4803]: I0127 22:23:24.324879 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54c72732-3cce-4113-98c5-cde54f72156f" path="/var/lib/kubelet/pods/54c72732-3cce-4113-98c5-cde54f72156f/volumes" Jan 27 22:23:33 crc kubenswrapper[4803]: I0127 22:23:33.957200 4803 scope.go:117] "RemoveContainer" containerID="0e7439b3f9441dab751d360345a4e43a712ecdc1feabaff926e545f40c5b1203" Jan 27 22:23:34 crc kubenswrapper[4803]: I0127 22:23:34.016592 4803 scope.go:117] "RemoveContainer" containerID="b02de6fdab70afb12bbf5b06c8d12e44efc27f31f2264ff76e11c619a0c725e4" Jan 27 22:23:34 crc kubenswrapper[4803]: I0127 22:23:34.069516 4803 scope.go:117] "RemoveContainer" containerID="dce989b27f022c765459e624f6cc7762dc4bfc64a9afc7c86bbcf98625aae767" Jan 27 22:23:34 crc kubenswrapper[4803]: I0127 22:23:34.128478 4803 scope.go:117] "RemoveContainer" containerID="45f7e908d8f9f431a81ef47da5b52c27f94ec80deac8382e7a40d9754b781494" Jan 27 22:23:34 crc kubenswrapper[4803]: I0127 22:23:34.193910 4803 scope.go:117] "RemoveContainer" 
containerID="004e9c75b67a035186c66deee967d3772d96ad0a67c77cc195461f0aaa27f00c" Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.718438 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8b84m"] Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.722092 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.734298 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8b84m"] Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.809365 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjpzl\" (UniqueName: \"kubernetes.io/projected/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-kube-api-access-xjpzl\") pod \"redhat-operators-8b84m\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.809467 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-utilities\") pod \"redhat-operators-8b84m\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.809547 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-catalog-content\") pod \"redhat-operators-8b84m\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.911540 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjpzl\" (UniqueName: \"kubernetes.io/projected/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-kube-api-access-xjpzl\") pod \"redhat-operators-8b84m\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.911616 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-utilities\") pod \"redhat-operators-8b84m\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.911664 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-catalog-content\") pod \"redhat-operators-8b84m\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.912189 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-utilities\") pod \"redhat-operators-8b84m\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.912271 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-catalog-content\") pod \"redhat-operators-8b84m\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:40 crc kubenswrapper[4803]: I0127 22:23:40.942267 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjpzl\" (UniqueName: \"kubernetes.io/projected/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-kube-api-access-xjpzl\") pod \"redhat-operators-8b84m\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:41 crc kubenswrapper[4803]: I0127 22:23:41.041189 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:41 crc kubenswrapper[4803]: I0127 22:23:41.521308 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8b84m"] Jan 27 22:23:42 crc kubenswrapper[4803]: I0127 22:23:42.246062 4803 generic.go:334] "Generic (PLEG): container finished" podID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerID="6f92219315afb88efc7c8795e4f01a718a439a4ace117d939c95b4a42563420b" exitCode=0 Jan 27 22:23:42 crc kubenswrapper[4803]: I0127 22:23:42.246128 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b84m" event={"ID":"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd","Type":"ContainerDied","Data":"6f92219315afb88efc7c8795e4f01a718a439a4ace117d939c95b4a42563420b"} Jan 27 22:23:42 crc kubenswrapper[4803]: I0127 22:23:42.246367 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b84m" event={"ID":"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd","Type":"ContainerStarted","Data":"ea477714f42bab58d2b7ba2535dc972d0255c1b8f903ec386227bdbb74128dbb"} Jan 27 22:23:43 crc kubenswrapper[4803]: I0127 22:23:43.257523 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b84m" event={"ID":"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd","Type":"ContainerStarted","Data":"b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a"} Jan 27 22:23:44 crc kubenswrapper[4803]: I0127 22:23:44.290600 4803 generic.go:334] "Generic (PLEG): container finished" podID="321c0a06-cd6e-491b-a376-526a87eb7392" containerID="9b43d9e5af0f18675c13ebe840dae8f5fb47ac99d928ab4f7897b22496bc8d46" exitCode=0 Jan 27 22:23:44 crc kubenswrapper[4803]: I0127 22:23:44.290714 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" event={"ID":"321c0a06-cd6e-491b-a376-526a87eb7392","Type":"ContainerDied","Data":"9b43d9e5af0f18675c13ebe840dae8f5fb47ac99d928ab4f7897b22496bc8d46"} Jan 27 22:23:45 crc kubenswrapper[4803]: I0127 22:23:45.839022 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:45 crc kubenswrapper[4803]: I0127 22:23:45.948568 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-ssh-key-openstack-edpm-ipam\") pod \"321c0a06-cd6e-491b-a376-526a87eb7392\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " Jan 27 22:23:45 crc kubenswrapper[4803]: I0127 22:23:45.948689 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzvqz\" (UniqueName: \"kubernetes.io/projected/321c0a06-cd6e-491b-a376-526a87eb7392-kube-api-access-wzvqz\") pod \"321c0a06-cd6e-491b-a376-526a87eb7392\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " Jan 27 22:23:45 crc kubenswrapper[4803]: I0127 22:23:45.948826 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-inventory\") pod \"321c0a06-cd6e-491b-a376-526a87eb7392\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " Jan 27 22:23:45 crc kubenswrapper[4803]: I0127 22:23:45.960917 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321c0a06-cd6e-491b-a376-526a87eb7392-kube-api-access-wzvqz" (OuterVolumeSpecName: "kube-api-access-wzvqz") pod "321c0a06-cd6e-491b-a376-526a87eb7392" (UID: "321c0a06-cd6e-491b-a376-526a87eb7392"). InnerVolumeSpecName "kube-api-access-wzvqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:23:45 crc kubenswrapper[4803]: E0127 22:23:45.982059 4803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-ssh-key-openstack-edpm-ipam podName:321c0a06-cd6e-491b-a376-526a87eb7392 nodeName:}" failed. No retries permitted until 2026-01-27 22:23:46.482009423 +0000 UTC m=+2178.898031122 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-ssh-key-openstack-edpm-ipam") pod "321c0a06-cd6e-491b-a376-526a87eb7392" (UID: "321c0a06-cd6e-491b-a376-526a87eb7392") : error deleting /var/lib/kubelet/pods/321c0a06-cd6e-491b-a376-526a87eb7392/volume-subpaths: remove /var/lib/kubelet/pods/321c0a06-cd6e-491b-a376-526a87eb7392/volume-subpaths: no such file or directory Jan 27 22:23:45 crc kubenswrapper[4803]: I0127 22:23:45.986133 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-inventory" (OuterVolumeSpecName: "inventory") pod "321c0a06-cd6e-491b-a376-526a87eb7392" (UID: "321c0a06-cd6e-491b-a376-526a87eb7392"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.051581 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzvqz\" (UniqueName: \"kubernetes.io/projected/321c0a06-cd6e-491b-a376-526a87eb7392-kube-api-access-wzvqz\") on node \"crc\" DevicePath \"\"" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.051610 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.313904 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.320743 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7fd9b" event={"ID":"321c0a06-cd6e-491b-a376-526a87eb7392","Type":"ContainerDied","Data":"ec8d2bd37e9477cc1a528da4fc5de7f32ad900268b9e13d35813893b59b9bd8a"} Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.320789 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec8d2bd37e9477cc1a528da4fc5de7f32ad900268b9e13d35813893b59b9bd8a" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.343431 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.343494 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.456626 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg"] Jan 27 22:23:46 crc kubenswrapper[4803]: E0127 22:23:46.457311 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321c0a06-cd6e-491b-a376-526a87eb7392" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.457335 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="321c0a06-cd6e-491b-a376-526a87eb7392" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.457613 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="321c0a06-cd6e-491b-a376-526a87eb7392" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.458600 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.467819 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg"] Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.563358 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-ssh-key-openstack-edpm-ipam\") pod \"321c0a06-cd6e-491b-a376-526a87eb7392\" (UID: \"321c0a06-cd6e-491b-a376-526a87eb7392\") " Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.564333 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptps5\" (UniqueName: \"kubernetes.io/projected/a626642b-e30b-4c1a-bf3d-aa1b6506002a-kube-api-access-ptps5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.564572 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.564638 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.567540 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "321c0a06-cd6e-491b-a376-526a87eb7392" (UID: "321c0a06-cd6e-491b-a376-526a87eb7392"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.667635 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.667717 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.667871 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptps5\" (UniqueName: \"kubernetes.io/projected/a626642b-e30b-4c1a-bf3d-aa1b6506002a-kube-api-access-ptps5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.668042 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/321c0a06-cd6e-491b-a376-526a87eb7392-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.671554 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.671954 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.684514 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptps5\" (UniqueName: \"kubernetes.io/projected/a626642b-e30b-4c1a-bf3d-aa1b6506002a-kube-api-access-ptps5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:23:46 crc kubenswrapper[4803]: I0127 22:23:46.779198 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:23:46 crc kubenswrapper[4803]: E0127 22:23:46.881959 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod321c0a06_cd6e_491b_a376_526a87eb7392.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod321c0a06_cd6e_491b_a376_526a87eb7392.slice/crio-ec8d2bd37e9477cc1a528da4fc5de7f32ad900268b9e13d35813893b59b9bd8a\": RecentStats: unable to find data in memory cache]" Jan 27 22:23:47 crc kubenswrapper[4803]: I0127 22:23:47.385705 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg"] Jan 27 22:23:48 crc kubenswrapper[4803]: I0127 22:23:48.341688 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" event={"ID":"a626642b-e30b-4c1a-bf3d-aa1b6506002a","Type":"ContainerStarted","Data":"1e01087fa7fbc9ff2ca64f705d1baf80f48a6ffd291c6807d207902bc9704c01"} Jan 27 22:23:48 crc kubenswrapper[4803]: I0127 22:23:48.342075 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" event={"ID":"a626642b-e30b-4c1a-bf3d-aa1b6506002a","Type":"ContainerStarted","Data":"89b173b097a6f4119bd3bc47431a8ada458357be121e2ff4585d0e57737dbc48"} Jan 27 22:23:48 crc kubenswrapper[4803]: I0127 22:23:48.345617 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b84m" event={"ID":"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd","Type":"ContainerDied","Data":"b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a"} Jan 27 22:23:48 crc kubenswrapper[4803]: I0127 22:23:48.345036 4803 generic.go:334] "Generic (PLEG): container finished" podID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerID="b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a" exitCode=0 Jan 27 22:23:48 crc kubenswrapper[4803]: I0127 22:23:48.404801 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" podStartSLOduration=1.950018016 podStartE2EDuration="2.404782324s" podCreationTimestamp="2026-01-27 22:23:46 +0000 UTC" firstStartedPulling="2026-01-27 22:23:47.392594545 +0000 UTC m=+2179.808616244" lastFinishedPulling="2026-01-27 22:23:47.847358853 +0000 UTC m=+2180.263380552" observedRunningTime="2026-01-27 22:23:48.385747462 +0000 UTC m=+2180.801769161" watchObservedRunningTime="2026-01-27 22:23:48.404782324 +0000 UTC m=+2180.820804023" Jan 27 22:23:49 crc kubenswrapper[4803]: I0127 22:23:49.361991 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b84m" event={"ID":"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd","Type":"ContainerStarted","Data":"2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22"} Jan 27 22:23:49 crc kubenswrapper[4803]: I0127 22:23:49.387767 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8b84m" podStartSLOduration=2.606545414 podStartE2EDuration="9.387746978s" podCreationTimestamp="2026-01-27 22:23:40 +0000 UTC" firstStartedPulling="2026-01-27 22:23:42.248295032 +0000 UTC m=+2174.664316731" lastFinishedPulling="2026-01-27 22:23:49.029496556 +0000 UTC 
m=+2181.445518295" observedRunningTime="2026-01-27 22:23:49.379215298 +0000 UTC m=+2181.795237017" watchObservedRunningTime="2026-01-27 22:23:49.387746978 +0000 UTC m=+2181.803768677" Jan 27 22:23:51 crc kubenswrapper[4803]: I0127 22:23:51.042359 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:51 crc kubenswrapper[4803]: I0127 22:23:51.044099 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:23:52 crc kubenswrapper[4803]: I0127 22:23:52.091328 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8b84m" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerName="registry-server" probeResult="failure" output=< Jan 27 22:23:52 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:23:52 crc kubenswrapper[4803]: > Jan 27 22:24:02 crc kubenswrapper[4803]: I0127 22:24:02.085495 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8b84m" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerName="registry-server" probeResult="failure" output=< Jan 27 22:24:02 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:24:02 crc kubenswrapper[4803]: > Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.405413 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m6nmz"] Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.417218 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.427486 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6nmz"] Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.529418 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-catalog-content\") pod \"certified-operators-m6nmz\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.529494 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-utilities\") pod \"certified-operators-m6nmz\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.529800 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q52qz\" (UniqueName: \"kubernetes.io/projected/4aa1a179-0f02-4815-a9cd-fe467393807c-kube-api-access-q52qz\") pod \"certified-operators-m6nmz\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.632414 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-catalog-content\") pod \"certified-operators-m6nmz\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " 
pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.632468 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-utilities\") pod \"certified-operators-m6nmz\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.632547 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q52qz\" (UniqueName: \"kubernetes.io/projected/4aa1a179-0f02-4815-a9cd-fe467393807c-kube-api-access-q52qz\") pod \"certified-operators-m6nmz\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.632895 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-catalog-content\") pod \"certified-operators-m6nmz\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.632910 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-utilities\") pod \"certified-operators-m6nmz\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.654002 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q52qz\" (UniqueName: \"kubernetes.io/projected/4aa1a179-0f02-4815-a9cd-fe467393807c-kube-api-access-q52qz\") pod \"certified-operators-m6nmz\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.748514 4803 util.go:30] "No sandbox for pod can be found. 
Jan 27 22:24:03 crc kubenswrapper[4803]: I0127 22:24:03.748514 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6nmz"
Jan 27 22:24:04 crc kubenswrapper[4803]: I0127 22:24:04.334575 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6nmz"]
Jan 27 22:24:04 crc kubenswrapper[4803]: I0127 22:24:04.554896 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6nmz" event={"ID":"4aa1a179-0f02-4815-a9cd-fe467393807c","Type":"ContainerStarted","Data":"f0fd9e1b3e95fb640f6b13be0edecd237bb324123ffda0faf13b255c3efcd058"}
Jan 27 22:24:05 crc kubenswrapper[4803]: I0127 22:24:05.564928 4803 generic.go:334] "Generic (PLEG): container finished" podID="4aa1a179-0f02-4815-a9cd-fe467393807c" containerID="ecf55a16a7be536a5b59b62814044af2716c757cfbf9098a032b12f35e375415" exitCode=0
Jan 27 22:24:05 crc kubenswrapper[4803]: I0127 22:24:05.565031 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6nmz" event={"ID":"4aa1a179-0f02-4815-a9cd-fe467393807c","Type":"ContainerDied","Data":"ecf55a16a7be536a5b59b62814044af2716c757cfbf9098a032b12f35e375415"}
Jan 27 22:24:06 crc kubenswrapper[4803]: I0127 22:24:06.577740 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6nmz" event={"ID":"4aa1a179-0f02-4815-a9cd-fe467393807c","Type":"ContainerStarted","Data":"854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d"}
Jan 27 22:24:08 crc kubenswrapper[4803]: I0127 22:24:08.049477 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-r9w4g"]
Jan 27 22:24:08 crc kubenswrapper[4803]: I0127 22:24:08.062927 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-r9w4g"]
Jan 27 22:24:08 crc kubenswrapper[4803]: I0127 22:24:08.336505 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fef152e-fc32-4940-9c38-193b933f28ad" path="/var/lib/kubelet/pods/5fef152e-fc32-4940-9c38-193b933f28ad/volumes"
Jan 27 22:24:08 crc kubenswrapper[4803]: I0127 22:24:08.601491 4803 generic.go:334] "Generic (PLEG): container finished" podID="4aa1a179-0f02-4815-a9cd-fe467393807c" containerID="854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d" exitCode=0
Jan 27 22:24:08 crc kubenswrapper[4803]: I0127 22:24:08.601565 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6nmz" event={"ID":"4aa1a179-0f02-4815-a9cd-fe467393807c","Type":"ContainerDied","Data":"854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d"}
Jan 27 22:24:09 crc kubenswrapper[4803]: I0127 22:24:09.617796 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6nmz" event={"ID":"4aa1a179-0f02-4815-a9cd-fe467393807c","Type":"ContainerStarted","Data":"680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d"}
Jan 27 22:24:09 crc kubenswrapper[4803]: I0127 22:24:09.647537 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m6nmz" podStartSLOduration=3.081160636 podStartE2EDuration="6.647514393s" podCreationTimestamp="2026-01-27 22:24:03 +0000 UTC" firstStartedPulling="2026-01-27 22:24:05.567414141 +0000 UTC m=+2197.983435840" lastFinishedPulling="2026-01-27 22:24:09.133767898 +0000 UTC m=+2201.549789597" observedRunningTime="2026-01-27 22:24:09.633767973 +0000 UTC m=+2202.049789682" watchObservedRunningTime="2026-01-27 22:24:09.647514393 +0000 UTC m=+2202.063536092"
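The startup-duration entry above reports two numbers: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration is the same span with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. Redoing the arithmetic with the logged values (a sketch of the relationship, not kubelet code):

```go
// latency_sketch.go - reproduce the pod_startup_latency_tracker arithmetic
// from the logged timestamps of certified-operators-m6nmz.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-27 22:24:03 +0000 UTC")
	firstPull := parse("2026-01-27 22:24:05.567414141 +0000 UTC")
	lastPull := parse("2026-01-27 22:24:09.133767898 +0000 UTC")
	running := parse("2026-01-27 22:24:09.647514393 +0000 UTC")

	e2e := running.Sub(created)        // 6.647514393s = podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 3.566353757s spent pulling images
	slo := e2e - pulling               // 3.081160636s = podStartSLOduration
	fmt.Println(e2e, pulling, slo)
}
```

6.647514393s minus 3.566353757s is 3.081160636s, matching the logged podStartSLOduration exactly; the tracker excludes pull time so slow registry downloads don't count against the startup SLO.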
Jan 27 22:24:12 crc kubenswrapper[4803]: I0127 22:24:12.099354 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8b84m" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerName="registry-server" probeResult="failure" output=<
Jan 27 22:24:12 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 22:24:12 crc kubenswrapper[4803]: >
Jan 27 22:24:13 crc kubenswrapper[4803]: I0127 22:24:13.750327 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m6nmz"
Jan 27 22:24:13 crc kubenswrapper[4803]: I0127 22:24:13.750443 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m6nmz"
Jan 27 22:24:13 crc kubenswrapper[4803]: I0127 22:24:13.803593 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m6nmz"
Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.160766 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nf5r9"]
Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.164175 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nf5r9"
Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.172458 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nf5r9"]
Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.282334 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-utilities\") pod \"community-operators-nf5r9\" (UID: \"5e742198-0241-4cac-ad5c-377834585d4f\") " pod="openshift-marketplace/community-operators-nf5r9"
Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.282429 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-catalog-content\") pod \"community-operators-nf5r9\" (UID: \"5e742198-0241-4cac-ad5c-377834585d4f\") " pod="openshift-marketplace/community-operators-nf5r9"
Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.282580 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j97d7\" (UniqueName: \"kubernetes.io/projected/5e742198-0241-4cac-ad5c-377834585d4f-kube-api-access-j97d7\") pod \"community-operators-nf5r9\" (UID: \"5e742198-0241-4cac-ad5c-377834585d4f\") " pod="openshift-marketplace/community-operators-nf5r9"
Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.385484 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j97d7\" (UniqueName: \"kubernetes.io/projected/5e742198-0241-4cac-ad5c-377834585d4f-kube-api-access-j97d7\") pod \"community-operators-nf5r9\" (UID: \"5e742198-0241-4cac-ad5c-377834585d4f\") " pod="openshift-marketplace/community-operators-nf5r9"
Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.385685 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-utilities\") pod \"community-operators-nf5r9\" (UID: 
\"5e742198-0241-4cac-ad5c-377834585d4f\") " pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.385773 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-catalog-content\") pod \"community-operators-nf5r9\" (UID: \"5e742198-0241-4cac-ad5c-377834585d4f\") " pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.386246 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-catalog-content\") pod \"community-operators-nf5r9\" (UID: \"5e742198-0241-4cac-ad5c-377834585d4f\") " pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.386390 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-utilities\") pod \"community-operators-nf5r9\" (UID: \"5e742198-0241-4cac-ad5c-377834585d4f\") " pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.423324 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j97d7\" (UniqueName: \"kubernetes.io/projected/5e742198-0241-4cac-ad5c-377834585d4f-kube-api-access-j97d7\") pod \"community-operators-nf5r9\" (UID: \"5e742198-0241-4cac-ad5c-377834585d4f\") " pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.481427 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:14 crc kubenswrapper[4803]: I0127 22:24:14.815119 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:15 crc kubenswrapper[4803]: I0127 22:24:15.063581 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nf5r9"] Jan 27 22:24:15 crc kubenswrapper[4803]: I0127 22:24:15.675600 4803 generic.go:334] "Generic (PLEG): container finished" podID="5e742198-0241-4cac-ad5c-377834585d4f" containerID="2ab12e5cf89606bf170bafd9a989821abafc60188a600513b312071cc82fe867" exitCode=0 Jan 27 22:24:15 crc kubenswrapper[4803]: I0127 22:24:15.675701 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nf5r9" event={"ID":"5e742198-0241-4cac-ad5c-377834585d4f","Type":"ContainerDied","Data":"2ab12e5cf89606bf170bafd9a989821abafc60188a600513b312071cc82fe867"} Jan 27 22:24:15 crc kubenswrapper[4803]: I0127 22:24:15.675971 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nf5r9" event={"ID":"5e742198-0241-4cac-ad5c-377834585d4f","Type":"ContainerStarted","Data":"b89aa81cd2e004553a25d84703aa9225219634f72e2e36e22e43cdd3b5afd8a0"} Jan 27 22:24:16 crc kubenswrapper[4803]: I0127 22:24:16.343722 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:24:16 crc kubenswrapper[4803]: I0127 22:24:16.343789 4803 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:24:16 crc kubenswrapper[4803]: I0127 22:24:16.343860 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 22:24:16 crc kubenswrapper[4803]: I0127 22:24:16.344867 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 22:24:16 crc kubenswrapper[4803]: I0127 22:24:16.344993 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" gracePeriod=600 Jan 27 22:24:16 crc kubenswrapper[4803]: E0127 22:24:16.466671 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:24:16 crc kubenswrapper[4803]: I0127 22:24:16.708252 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" exitCode=0 Jan 27 22:24:16 crc kubenswrapper[4803]: I0127 22:24:16.708336 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff"} Jan 27 22:24:16 crc kubenswrapper[4803]: I0127 22:24:16.708371 4803 scope.go:117] "RemoveContainer" containerID="4cb856bad298c87c22d13858541beda57c61d6cafc0180491d51e1bced258716" Jan 27 22:24:16 crc kubenswrapper[4803]: I0127 22:24:16.709177 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:24:16 crc kubenswrapper[4803]: E0127 22:24:16.709472 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:24:16 crc kubenswrapper[4803]: I0127 22:24:16.716165 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nf5r9" 
event={"ID":"5e742198-0241-4cac-ad5c-377834585d4f","Type":"ContainerStarted","Data":"a5a40e0dc2edf4629b34534747004ffa335e1be2b6fe1d374d17c491e31feb6d"} Jan 27 22:24:18 crc kubenswrapper[4803]: I0127 22:24:18.736696 4803 generic.go:334] "Generic (PLEG): container finished" podID="5e742198-0241-4cac-ad5c-377834585d4f" containerID="a5a40e0dc2edf4629b34534747004ffa335e1be2b6fe1d374d17c491e31feb6d" exitCode=0 Jan 27 22:24:18 crc kubenswrapper[4803]: I0127 22:24:18.736760 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nf5r9" event={"ID":"5e742198-0241-4cac-ad5c-377834585d4f","Type":"ContainerDied","Data":"a5a40e0dc2edf4629b34534747004ffa335e1be2b6fe1d374d17c491e31feb6d"} Jan 27 22:24:18 crc kubenswrapper[4803]: I0127 22:24:18.951469 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6nmz"] Jan 27 22:24:18 crc kubenswrapper[4803]: I0127 22:24:18.951718 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m6nmz" podUID="4aa1a179-0f02-4815-a9cd-fe467393807c" containerName="registry-server" containerID="cri-o://680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d" gracePeriod=2 Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.487779 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.623969 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-utilities\") pod \"4aa1a179-0f02-4815-a9cd-fe467393807c\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.624048 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-catalog-content\") pod \"4aa1a179-0f02-4815-a9cd-fe467393807c\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.624133 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q52qz\" (UniqueName: \"kubernetes.io/projected/4aa1a179-0f02-4815-a9cd-fe467393807c-kube-api-access-q52qz\") pod \"4aa1a179-0f02-4815-a9cd-fe467393807c\" (UID: \"4aa1a179-0f02-4815-a9cd-fe467393807c\") " Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.627755 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-utilities" (OuterVolumeSpecName: "utilities") pod "4aa1a179-0f02-4815-a9cd-fe467393807c" (UID: "4aa1a179-0f02-4815-a9cd-fe467393807c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.645151 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa1a179-0f02-4815-a9cd-fe467393807c-kube-api-access-q52qz" (OuterVolumeSpecName: "kube-api-access-q52qz") pod "4aa1a179-0f02-4815-a9cd-fe467393807c" (UID: "4aa1a179-0f02-4815-a9cd-fe467393807c"). InnerVolumeSpecName "kube-api-access-q52qz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.667992 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4aa1a179-0f02-4815-a9cd-fe467393807c" (UID: "4aa1a179-0f02-4815-a9cd-fe467393807c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.727242 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.727278 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa1a179-0f02-4815-a9cd-fe467393807c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.727289 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q52qz\" (UniqueName: \"kubernetes.io/projected/4aa1a179-0f02-4815-a9cd-fe467393807c-kube-api-access-q52qz\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.748123 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nf5r9" event={"ID":"5e742198-0241-4cac-ad5c-377834585d4f","Type":"ContainerStarted","Data":"3e43d1a669e90a23d75c7a8ad17b4adfe00a1e55dce456e95eeba5ff379ea5d4"} Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.749978 4803 generic.go:334] "Generic (PLEG): container finished" podID="4aa1a179-0f02-4815-a9cd-fe467393807c" containerID="680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d" exitCode=0 Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.750032 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6nmz" event={"ID":"4aa1a179-0f02-4815-a9cd-fe467393807c","Type":"ContainerDied","Data":"680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d"} Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.750062 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6nmz" event={"ID":"4aa1a179-0f02-4815-a9cd-fe467393807c","Type":"ContainerDied","Data":"f0fd9e1b3e95fb640f6b13be0edecd237bb324123ffda0faf13b255c3efcd058"} Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.750079 4803 scope.go:117] "RemoveContainer" containerID="680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.750195 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6nmz" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.774653 4803 scope.go:117] "RemoveContainer" containerID="854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.783864 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nf5r9" podStartSLOduration=2.1426806210000002 podStartE2EDuration="5.783826841s" podCreationTimestamp="2026-01-27 22:24:14 +0000 UTC" firstStartedPulling="2026-01-27 22:24:15.679359962 +0000 UTC m=+2208.095381661" lastFinishedPulling="2026-01-27 22:24:19.320506182 +0000 UTC m=+2211.736527881" observedRunningTime="2026-01-27 22:24:19.769209077 +0000 UTC m=+2212.185230776" watchObservedRunningTime="2026-01-27 22:24:19.783826841 +0000 UTC m=+2212.199848540" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.797104 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6nmz"] Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.804717 4803 scope.go:117] "RemoveContainer" containerID="ecf55a16a7be536a5b59b62814044af2716c757cfbf9098a032b12f35e375415" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.806650 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m6nmz"] Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.861015 4803 scope.go:117] "RemoveContainer" containerID="680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d" Jan 27 22:24:19 crc kubenswrapper[4803]: E0127 22:24:19.861529 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d\": container with ID starting with 680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d not found: ID does not exist" containerID="680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.861573 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d"} err="failed to get container status \"680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d\": rpc error: code = NotFound desc = could not find container \"680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d\": container with ID starting with 680e43425ff3c8d8d0da218e4a4b76b5c2a1ce3823a4360e6fcf68260cf42a2d not found: ID does not exist" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.861599 4803 scope.go:117] "RemoveContainer" containerID="854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d" Jan 27 22:24:19 crc kubenswrapper[4803]: E0127 22:24:19.862005 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d\": container with ID starting with 854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d not found: ID does not exist" containerID="854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d" Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.862052 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d"} err="failed to 
get container status \"854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d\": rpc error: code = NotFound desc = could not find container \"854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d\": container with ID starting with 854209ce3eab099d26f26e81d311b47c97165e988e82a138b91169f54890b68d not found: ID does not exist"
Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.862080 4803 scope.go:117] "RemoveContainer" containerID="ecf55a16a7be536a5b59b62814044af2716c757cfbf9098a032b12f35e375415"
Jan 27 22:24:19 crc kubenswrapper[4803]: E0127 22:24:19.862526 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf55a16a7be536a5b59b62814044af2716c757cfbf9098a032b12f35e375415\": container with ID starting with ecf55a16a7be536a5b59b62814044af2716c757cfbf9098a032b12f35e375415 not found: ID does not exist" containerID="ecf55a16a7be536a5b59b62814044af2716c757cfbf9098a032b12f35e375415"
Jan 27 22:24:19 crc kubenswrapper[4803]: I0127 22:24:19.862550 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf55a16a7be536a5b59b62814044af2716c757cfbf9098a032b12f35e375415"} err="failed to get container status \"ecf55a16a7be536a5b59b62814044af2716c757cfbf9098a032b12f35e375415\": rpc error: code = NotFound desc = could not find container \"ecf55a16a7be536a5b59b62814044af2716c757cfbf9098a032b12f35e375415\": container with ID starting with ecf55a16a7be536a5b59b62814044af2716c757cfbf9098a032b12f35e375415 not found: ID does not exist"
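The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above all carry gRPC code NotFound: the containers had already been removed by the time the follow-up status lookup ran, so these are benign races rather than real failures. A client talking to a CRI runtime can separate the already-deleted case from genuine errors by inspecting the status code (an illustrative sketch, not the kubelet's actual handling):

```go
// notfound_sketch.go - distinguishing "already gone" from a real failure
// when a container removal races with a status lookup over gRPC.
package sketch

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether err is a gRPC NotFound, i.e. the container
// was deleted by an earlier pass and the error can be logged and ignored.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}
```

That is why the log shows each NotFound as an error line immediately followed by an informational "DeleteContainer returned error" entry and the sync loop simply moving on.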
containerName="extract-content" Jan 27 22:24:21 crc kubenswrapper[4803]: I0127 22:24:21.969266 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa1a179-0f02-4815-a9cd-fe467393807c" containerName="registry-server" Jan 27 22:24:21 crc kubenswrapper[4803]: I0127 22:24:21.971625 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:21 crc kubenswrapper[4803]: I0127 22:24:21.984130 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jj64m"] Jan 27 22:24:22 crc kubenswrapper[4803]: I0127 22:24:22.086606 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xptjf\" (UniqueName: \"kubernetes.io/projected/f6a96176-abc3-495a-a4bf-609cae102346-kube-api-access-xptjf\") pod \"redhat-marketplace-jj64m\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:22 crc kubenswrapper[4803]: I0127 22:24:22.086675 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-catalog-content\") pod \"redhat-marketplace-jj64m\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:22 crc kubenswrapper[4803]: I0127 22:24:22.086747 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-utilities\") pod \"redhat-marketplace-jj64m\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:22 crc kubenswrapper[4803]: I0127 22:24:22.189310 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xptjf\" (UniqueName: \"kubernetes.io/projected/f6a96176-abc3-495a-a4bf-609cae102346-kube-api-access-xptjf\") pod \"redhat-marketplace-jj64m\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:22 crc kubenswrapper[4803]: I0127 22:24:22.189366 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-catalog-content\") pod \"redhat-marketplace-jj64m\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:22 crc kubenswrapper[4803]: I0127 22:24:22.189410 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-utilities\") pod \"redhat-marketplace-jj64m\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:22 crc kubenswrapper[4803]: I0127 22:24:22.189859 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-catalog-content\") pod \"redhat-marketplace-jj64m\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:22 crc kubenswrapper[4803]: I0127 22:24:22.190011 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-utilities\") pod \"redhat-marketplace-jj64m\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:22 crc kubenswrapper[4803]: I0127 22:24:22.207693 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xptjf\" (UniqueName: \"kubernetes.io/projected/f6a96176-abc3-495a-a4bf-609cae102346-kube-api-access-xptjf\") pod \"redhat-marketplace-jj64m\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:22 crc kubenswrapper[4803]: I0127 22:24:22.303760 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:22 crc kubenswrapper[4803]: I0127 22:24:22.820756 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jj64m"] Jan 27 22:24:23 crc kubenswrapper[4803]: I0127 22:24:23.788231 4803 generic.go:334] "Generic (PLEG): container finished" podID="f6a96176-abc3-495a-a4bf-609cae102346" containerID="475a45e70ba0b50a92b6fb9a6e1993cc0e686fe5ca556e6b03cdb217ed88e2c1" exitCode=0 Jan 27 22:24:23 crc kubenswrapper[4803]: I0127 22:24:23.788523 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj64m" event={"ID":"f6a96176-abc3-495a-a4bf-609cae102346","Type":"ContainerDied","Data":"475a45e70ba0b50a92b6fb9a6e1993cc0e686fe5ca556e6b03cdb217ed88e2c1"} Jan 27 22:24:23 crc kubenswrapper[4803]: I0127 22:24:23.788554 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj64m" event={"ID":"f6a96176-abc3-495a-a4bf-609cae102346","Type":"ContainerStarted","Data":"78ff1de12424083eff655754ddaffed40ef16d6209a71bade25a69d70f32a500"} Jan 27 22:24:24 crc kubenswrapper[4803]: I0127 22:24:24.481815 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:24 crc kubenswrapper[4803]: I0127 22:24:24.482150 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:24 crc kubenswrapper[4803]: I0127 22:24:24.531815 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:24 crc kubenswrapper[4803]: I0127 22:24:24.807028 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj64m" event={"ID":"f6a96176-abc3-495a-a4bf-609cae102346","Type":"ContainerStarted","Data":"93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133"} Jan 27 22:24:24 crc kubenswrapper[4803]: I0127 22:24:24.882260 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.149597 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8b84m"] Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.149820 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8b84m" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerName="registry-server" containerID="cri-o://2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22" gracePeriod=2 Jan 27 22:24:25 crc 
kubenswrapper[4803]: I0127 22:24:25.659096 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.771119 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-utilities\") pod \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.771339 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-catalog-content\") pod \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.771437 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjpzl\" (UniqueName: \"kubernetes.io/projected/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-kube-api-access-xjpzl\") pod \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\" (UID: \"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd\") " Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.771705 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-utilities" (OuterVolumeSpecName: "utilities") pod "83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" (UID: "83e7e6ff-8205-4792-8af0-7cbd20aa2ebd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.772032 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.778435 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-kube-api-access-xjpzl" (OuterVolumeSpecName: "kube-api-access-xjpzl") pod "83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" (UID: "83e7e6ff-8205-4792-8af0-7cbd20aa2ebd"). InnerVolumeSpecName "kube-api-access-xjpzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.819047 4803 generic.go:334] "Generic (PLEG): container finished" podID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerID="2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22" exitCode=0 Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.820101 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8b84m" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.820656 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b84m" event={"ID":"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd","Type":"ContainerDied","Data":"2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22"} Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.820686 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b84m" event={"ID":"83e7e6ff-8205-4792-8af0-7cbd20aa2ebd","Type":"ContainerDied","Data":"ea477714f42bab58d2b7ba2535dc972d0255c1b8f903ec386227bdbb74128dbb"} Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.820703 4803 scope.go:117] "RemoveContainer" containerID="2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.850263 4803 scope.go:117] "RemoveContainer" containerID="b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.875596 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjpzl\" (UniqueName: \"kubernetes.io/projected/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-kube-api-access-xjpzl\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.876077 4803 scope.go:117] "RemoveContainer" containerID="6f92219315afb88efc7c8795e4f01a718a439a4ace117d939c95b4a42563420b" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.952593 4803 scope.go:117] "RemoveContainer" containerID="2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22" Jan 27 22:24:25 crc kubenswrapper[4803]: E0127 22:24:25.953295 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22\": container with ID starting with 2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22 not found: ID does not exist" containerID="2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.953372 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22"} err="failed to get container status \"2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22\": rpc error: code = NotFound desc = could not find container \"2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22\": container with ID starting with 2934ab2418c93a5bfb2ae3bff94794d6a4810a9735288ad02f44fabc043f5f22 not found: ID does not exist" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.953406 4803 scope.go:117] "RemoveContainer" containerID="b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a" Jan 27 22:24:25 crc kubenswrapper[4803]: E0127 22:24:25.953814 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a\": container with ID starting with b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a not found: ID does not exist" containerID="b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.953872 4803 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a"} err="failed to get container status \"b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a\": rpc error: code = NotFound desc = could not find container \"b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a\": container with ID starting with b9ffe4766589deeb9a75abd5434000581eb92108a2ab9050512fa09356900b2a not found: ID does not exist" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.953901 4803 scope.go:117] "RemoveContainer" containerID="6f92219315afb88efc7c8795e4f01a718a439a4ace117d939c95b4a42563420b" Jan 27 22:24:25 crc kubenswrapper[4803]: E0127 22:24:25.954188 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f92219315afb88efc7c8795e4f01a718a439a4ace117d939c95b4a42563420b\": container with ID starting with 6f92219315afb88efc7c8795e4f01a718a439a4ace117d939c95b4a42563420b not found: ID does not exist" containerID="6f92219315afb88efc7c8795e4f01a718a439a4ace117d939c95b4a42563420b" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.954213 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f92219315afb88efc7c8795e4f01a718a439a4ace117d939c95b4a42563420b"} err="failed to get container status \"6f92219315afb88efc7c8795e4f01a718a439a4ace117d939c95b4a42563420b\": rpc error: code = NotFound desc = could not find container \"6f92219315afb88efc7c8795e4f01a718a439a4ace117d939c95b4a42563420b\": container with ID starting with 6f92219315afb88efc7c8795e4f01a718a439a4ace117d939c95b4a42563420b not found: ID does not exist" Jan 27 22:24:25 crc kubenswrapper[4803]: I0127 22:24:25.984043 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" (UID: "83e7e6ff-8205-4792-8af0-7cbd20aa2ebd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:24:26 crc kubenswrapper[4803]: I0127 22:24:26.080741 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:26 crc kubenswrapper[4803]: I0127 22:24:26.177473 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8b84m"] Jan 27 22:24:26 crc kubenswrapper[4803]: I0127 22:24:26.192283 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8b84m"] Jan 27 22:24:26 crc kubenswrapper[4803]: I0127 22:24:26.318752 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" path="/var/lib/kubelet/pods/83e7e6ff-8205-4792-8af0-7cbd20aa2ebd/volumes" Jan 27 22:24:26 crc kubenswrapper[4803]: I0127 22:24:26.835470 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj64m" event={"ID":"f6a96176-abc3-495a-a4bf-609cae102346","Type":"ContainerDied","Data":"93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133"} Jan 27 22:24:26 crc kubenswrapper[4803]: I0127 22:24:26.835437 4803 generic.go:334] "Generic (PLEG): container finished" podID="f6a96176-abc3-495a-a4bf-609cae102346" containerID="93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133" exitCode=0 Jan 27 22:24:27 crc kubenswrapper[4803]: I0127 22:24:27.875381 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj64m" event={"ID":"f6a96176-abc3-495a-a4bf-609cae102346","Type":"ContainerStarted","Data":"a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a"} Jan 27 22:24:27 crc kubenswrapper[4803]: I0127 22:24:27.907240 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jj64m" podStartSLOduration=3.370811085 podStartE2EDuration="6.907224356s" podCreationTimestamp="2026-01-27 22:24:21 +0000 UTC" firstStartedPulling="2026-01-27 22:24:23.79056247 +0000 UTC m=+2216.206584179" lastFinishedPulling="2026-01-27 22:24:27.326975751 +0000 UTC m=+2219.742997450" observedRunningTime="2026-01-27 22:24:27.896247791 +0000 UTC m=+2220.312269500" watchObservedRunningTime="2026-01-27 22:24:27.907224356 +0000 UTC m=+2220.323246055" Jan 27 22:24:28 crc kubenswrapper[4803]: I0127 22:24:28.550186 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nf5r9"] Jan 27 22:24:28 crc kubenswrapper[4803]: I0127 22:24:28.550719 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nf5r9" podUID="5e742198-0241-4cac-ad5c-377834585d4f" containerName="registry-server" containerID="cri-o://3e43d1a669e90a23d75c7a8ad17b4adfe00a1e55dce456e95eeba5ff379ea5d4" gracePeriod=2 Jan 27 22:24:28 crc kubenswrapper[4803]: I0127 22:24:28.937592 4803 generic.go:334] "Generic (PLEG): container finished" podID="5e742198-0241-4cac-ad5c-377834585d4f" containerID="3e43d1a669e90a23d75c7a8ad17b4adfe00a1e55dce456e95eeba5ff379ea5d4" exitCode=0 Jan 27 22:24:28 crc kubenswrapper[4803]: I0127 22:24:28.938597 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nf5r9" 
event={"ID":"5e742198-0241-4cac-ad5c-377834585d4f","Type":"ContainerDied","Data":"3e43d1a669e90a23d75c7a8ad17b4adfe00a1e55dce456e95eeba5ff379ea5d4"} Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.186533 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.261642 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-catalog-content\") pod \"5e742198-0241-4cac-ad5c-377834585d4f\" (UID: \"5e742198-0241-4cac-ad5c-377834585d4f\") " Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.261934 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-utilities\") pod \"5e742198-0241-4cac-ad5c-377834585d4f\" (UID: \"5e742198-0241-4cac-ad5c-377834585d4f\") " Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.262026 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j97d7\" (UniqueName: \"kubernetes.io/projected/5e742198-0241-4cac-ad5c-377834585d4f-kube-api-access-j97d7\") pod \"5e742198-0241-4cac-ad5c-377834585d4f\" (UID: \"5e742198-0241-4cac-ad5c-377834585d4f\") " Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.262751 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-utilities" (OuterVolumeSpecName: "utilities") pod "5e742198-0241-4cac-ad5c-377834585d4f" (UID: "5e742198-0241-4cac-ad5c-377834585d4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.267873 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e742198-0241-4cac-ad5c-377834585d4f-kube-api-access-j97d7" (OuterVolumeSpecName: "kube-api-access-j97d7") pod "5e742198-0241-4cac-ad5c-377834585d4f" (UID: "5e742198-0241-4cac-ad5c-377834585d4f"). InnerVolumeSpecName "kube-api-access-j97d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.328701 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e742198-0241-4cac-ad5c-377834585d4f" (UID: "5e742198-0241-4cac-ad5c-377834585d4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.365066 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.365143 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j97d7\" (UniqueName: \"kubernetes.io/projected/5e742198-0241-4cac-ad5c-377834585d4f-kube-api-access-j97d7\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.365157 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e742198-0241-4cac-ad5c-377834585d4f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.953133 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nf5r9" event={"ID":"5e742198-0241-4cac-ad5c-377834585d4f","Type":"ContainerDied","Data":"b89aa81cd2e004553a25d84703aa9225219634f72e2e36e22e43cdd3b5afd8a0"} Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.953208 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nf5r9" Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.953401 4803 scope.go:117] "RemoveContainer" containerID="3e43d1a669e90a23d75c7a8ad17b4adfe00a1e55dce456e95eeba5ff379ea5d4" Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.980307 4803 scope.go:117] "RemoveContainer" containerID="a5a40e0dc2edf4629b34534747004ffa335e1be2b6fe1d374d17c491e31feb6d" Jan 27 22:24:29 crc kubenswrapper[4803]: I0127 22:24:29.992780 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nf5r9"] Jan 27 22:24:30 crc kubenswrapper[4803]: I0127 22:24:30.000810 4803 scope.go:117] "RemoveContainer" containerID="2ab12e5cf89606bf170bafd9a989821abafc60188a600513b312071cc82fe867" Jan 27 22:24:30 crc kubenswrapper[4803]: I0127 22:24:30.002147 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nf5r9"] Jan 27 22:24:30 crc kubenswrapper[4803]: I0127 22:24:30.321326 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e742198-0241-4cac-ad5c-377834585d4f" path="/var/lib/kubelet/pods/5e742198-0241-4cac-ad5c-377834585d4f/volumes" Jan 27 22:24:32 crc kubenswrapper[4803]: I0127 22:24:32.304174 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:32 crc kubenswrapper[4803]: I0127 22:24:32.304488 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:32 crc kubenswrapper[4803]: I0127 22:24:32.306349 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:24:32 crc kubenswrapper[4803]: E0127 22:24:32.306607 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" 
podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:24:32 crc kubenswrapper[4803]: I0127 22:24:32.351430 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:33 crc kubenswrapper[4803]: I0127 22:24:33.038731 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:34 crc kubenswrapper[4803]: I0127 22:24:34.325566 4803 scope.go:117] "RemoveContainer" containerID="f0b199a22b85a2febb197584cc45ac7d491db5f1829cfcea3fc939e5eea3ff64" Jan 27 22:24:34 crc kubenswrapper[4803]: I0127 22:24:34.751298 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jj64m"] Jan 27 22:24:35 crc kubenswrapper[4803]: I0127 22:24:35.011943 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jj64m" podUID="f6a96176-abc3-495a-a4bf-609cae102346" containerName="registry-server" containerID="cri-o://a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a" gracePeriod=2 Jan 27 22:24:35 crc kubenswrapper[4803]: I0127 22:24:35.542302 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:35 crc kubenswrapper[4803]: I0127 22:24:35.618087 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xptjf\" (UniqueName: \"kubernetes.io/projected/f6a96176-abc3-495a-a4bf-609cae102346-kube-api-access-xptjf\") pod \"f6a96176-abc3-495a-a4bf-609cae102346\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " Jan 27 22:24:35 crc kubenswrapper[4803]: I0127 22:24:35.618384 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-catalog-content\") pod \"f6a96176-abc3-495a-a4bf-609cae102346\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " Jan 27 22:24:35 crc kubenswrapper[4803]: I0127 22:24:35.618416 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-utilities\") pod \"f6a96176-abc3-495a-a4bf-609cae102346\" (UID: \"f6a96176-abc3-495a-a4bf-609cae102346\") " Jan 27 22:24:35 crc kubenswrapper[4803]: I0127 22:24:35.620776 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-utilities" (OuterVolumeSpecName: "utilities") pod "f6a96176-abc3-495a-a4bf-609cae102346" (UID: "f6a96176-abc3-495a-a4bf-609cae102346"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:24:35 crc kubenswrapper[4803]: I0127 22:24:35.627218 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6a96176-abc3-495a-a4bf-609cae102346-kube-api-access-xptjf" (OuterVolumeSpecName: "kube-api-access-xptjf") pod "f6a96176-abc3-495a-a4bf-609cae102346" (UID: "f6a96176-abc3-495a-a4bf-609cae102346"). InnerVolumeSpecName "kube-api-access-xptjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:24:35 crc kubenswrapper[4803]: I0127 22:24:35.644930 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6a96176-abc3-495a-a4bf-609cae102346" (UID: "f6a96176-abc3-495a-a4bf-609cae102346"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:24:35 crc kubenswrapper[4803]: I0127 22:24:35.721483 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xptjf\" (UniqueName: \"kubernetes.io/projected/f6a96176-abc3-495a-a4bf-609cae102346-kube-api-access-xptjf\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:35 crc kubenswrapper[4803]: I0127 22:24:35.721521 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:35 crc kubenswrapper[4803]: I0127 22:24:35.721531 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a96176-abc3-495a-a4bf-609cae102346-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.025023 4803 generic.go:334] "Generic (PLEG): container finished" podID="f6a96176-abc3-495a-a4bf-609cae102346" containerID="a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a" exitCode=0 Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.025106 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj64m" event={"ID":"f6a96176-abc3-495a-a4bf-609cae102346","Type":"ContainerDied","Data":"a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a"} Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.025429 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj64m" event={"ID":"f6a96176-abc3-495a-a4bf-609cae102346","Type":"ContainerDied","Data":"78ff1de12424083eff655754ddaffed40ef16d6209a71bade25a69d70f32a500"} Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.025459 4803 scope.go:117] "RemoveContainer" containerID="a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a" Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.025124 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jj64m" Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.047515 4803 scope.go:117] "RemoveContainer" containerID="93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133" Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.068756 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jj64m"] Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.080363 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jj64m"] Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.092180 4803 scope.go:117] "RemoveContainer" containerID="475a45e70ba0b50a92b6fb9a6e1993cc0e686fe5ca556e6b03cdb217ed88e2c1" Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.153465 4803 scope.go:117] "RemoveContainer" containerID="a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a" Jan 27 22:24:36 crc kubenswrapper[4803]: E0127 22:24:36.154289 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a\": container with ID starting with a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a not found: ID does not exist" containerID="a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a" Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.154338 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a"} err="failed to get container status \"a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a\": rpc error: code = NotFound desc = could not find container \"a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a\": container with ID starting with a6d4a2b652182397c1ac04f066ead568082f23255ba378e9070fe627c491d22a not found: ID does not exist" Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.154366 4803 scope.go:117] "RemoveContainer" containerID="93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133" Jan 27 22:24:36 crc kubenswrapper[4803]: E0127 22:24:36.154732 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133\": container with ID starting with 93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133 not found: ID does not exist" containerID="93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133" Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.154863 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133"} err="failed to get container status \"93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133\": rpc error: code = NotFound desc = could not find container \"93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133\": container with ID starting with 93c5ff4bc4523e2ac86f4e3b8d7dfc2a9c7a546ad13b5517d1338e3d81522133 not found: ID does not exist" Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.154958 4803 scope.go:117] "RemoveContainer" containerID="475a45e70ba0b50a92b6fb9a6e1993cc0e686fe5ca556e6b03cdb217ed88e2c1" Jan 27 22:24:36 crc kubenswrapper[4803]: E0127 22:24:36.155263 4803 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"475a45e70ba0b50a92b6fb9a6e1993cc0e686fe5ca556e6b03cdb217ed88e2c1\": container with ID starting with 475a45e70ba0b50a92b6fb9a6e1993cc0e686fe5ca556e6b03cdb217ed88e2c1 not found: ID does not exist" containerID="475a45e70ba0b50a92b6fb9a6e1993cc0e686fe5ca556e6b03cdb217ed88e2c1" Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.155293 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475a45e70ba0b50a92b6fb9a6e1993cc0e686fe5ca556e6b03cdb217ed88e2c1"} err="failed to get container status \"475a45e70ba0b50a92b6fb9a6e1993cc0e686fe5ca556e6b03cdb217ed88e2c1\": rpc error: code = NotFound desc = could not find container \"475a45e70ba0b50a92b6fb9a6e1993cc0e686fe5ca556e6b03cdb217ed88e2c1\": container with ID starting with 475a45e70ba0b50a92b6fb9a6e1993cc0e686fe5ca556e6b03cdb217ed88e2c1 not found: ID does not exist" Jan 27 22:24:36 crc kubenswrapper[4803]: I0127 22:24:36.319069 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6a96176-abc3-495a-a4bf-609cae102346" path="/var/lib/kubelet/pods/f6a96176-abc3-495a-a4bf-609cae102346/volumes" Jan 27 22:24:40 crc kubenswrapper[4803]: I0127 22:24:40.065643 4803 generic.go:334] "Generic (PLEG): container finished" podID="a626642b-e30b-4c1a-bf3d-aa1b6506002a" containerID="1e01087fa7fbc9ff2ca64f705d1baf80f48a6ffd291c6807d207902bc9704c01" exitCode=0 Jan 27 22:24:40 crc kubenswrapper[4803]: I0127 22:24:40.065764 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" event={"ID":"a626642b-e30b-4c1a-bf3d-aa1b6506002a","Type":"ContainerDied","Data":"1e01087fa7fbc9ff2ca64f705d1baf80f48a6ffd291c6807d207902bc9704c01"} Jan 27 22:24:41 crc kubenswrapper[4803]: I0127 22:24:41.572266 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:24:41 crc kubenswrapper[4803]: I0127 22:24:41.655240 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-inventory\") pod \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " Jan 27 22:24:41 crc kubenswrapper[4803]: I0127 22:24:41.655881 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptps5\" (UniqueName: \"kubernetes.io/projected/a626642b-e30b-4c1a-bf3d-aa1b6506002a-kube-api-access-ptps5\") pod \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " Jan 27 22:24:41 crc kubenswrapper[4803]: I0127 22:24:41.656071 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-ssh-key-openstack-edpm-ipam\") pod \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\" (UID: \"a626642b-e30b-4c1a-bf3d-aa1b6506002a\") " Jan 27 22:24:41 crc kubenswrapper[4803]: I0127 22:24:41.663540 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a626642b-e30b-4c1a-bf3d-aa1b6506002a-kube-api-access-ptps5" (OuterVolumeSpecName: "kube-api-access-ptps5") pod "a626642b-e30b-4c1a-bf3d-aa1b6506002a" (UID: "a626642b-e30b-4c1a-bf3d-aa1b6506002a"). InnerVolumeSpecName "kube-api-access-ptps5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:24:41 crc kubenswrapper[4803]: I0127 22:24:41.693211 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a626642b-e30b-4c1a-bf3d-aa1b6506002a" (UID: "a626642b-e30b-4c1a-bf3d-aa1b6506002a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:24:41 crc kubenswrapper[4803]: I0127 22:24:41.695359 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-inventory" (OuterVolumeSpecName: "inventory") pod "a626642b-e30b-4c1a-bf3d-aa1b6506002a" (UID: "a626642b-e30b-4c1a-bf3d-aa1b6506002a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:24:41 crc kubenswrapper[4803]: I0127 22:24:41.759372 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptps5\" (UniqueName: \"kubernetes.io/projected/a626642b-e30b-4c1a-bf3d-aa1b6506002a-kube-api-access-ptps5\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:41 crc kubenswrapper[4803]: I0127 22:24:41.759623 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:41 crc kubenswrapper[4803]: I0127 22:24:41.759698 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a626642b-e30b-4c1a-bf3d-aa1b6506002a-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.089830 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" event={"ID":"a626642b-e30b-4c1a-bf3d-aa1b6506002a","Type":"ContainerDied","Data":"89b173b097a6f4119bd3bc47431a8ada458357be121e2ff4585d0e57737dbc48"} Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.089908 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b173b097a6f4119bd3bc47431a8ada458357be121e2ff4585d0e57737dbc48" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.089921 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.202076 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-92xsq"] Jan 27 22:24:42 crc kubenswrapper[4803]: E0127 22:24:42.202815 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e742198-0241-4cac-ad5c-377834585d4f" containerName="extract-utilities" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.202901 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e742198-0241-4cac-ad5c-377834585d4f" containerName="extract-utilities" Jan 27 22:24:42 crc kubenswrapper[4803]: E0127 22:24:42.202958 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerName="extract-utilities" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.203010 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerName="extract-utilities" Jan 27 22:24:42 crc kubenswrapper[4803]: E0127 22:24:42.203069 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a626642b-e30b-4c1a-bf3d-aa1b6506002a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.203121 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a626642b-e30b-4c1a-bf3d-aa1b6506002a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 22:24:42 crc kubenswrapper[4803]: E0127 22:24:42.203194 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerName="extract-content" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.203241 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerName="extract-content" Jan 27 22:24:42 crc kubenswrapper[4803]: E0127 22:24:42.203316 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a96176-abc3-495a-a4bf-609cae102346" containerName="extract-utilities" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.203364 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a96176-abc3-495a-a4bf-609cae102346" containerName="extract-utilities" Jan 27 22:24:42 crc kubenswrapper[4803]: E0127 22:24:42.203423 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerName="registry-server" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.203472 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerName="registry-server" Jan 27 22:24:42 crc kubenswrapper[4803]: E0127 22:24:42.203520 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a96176-abc3-495a-a4bf-609cae102346" containerName="extract-content" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.203569 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a96176-abc3-495a-a4bf-609cae102346" containerName="extract-content" Jan 27 22:24:42 crc kubenswrapper[4803]: E0127 22:24:42.203624 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e742198-0241-4cac-ad5c-377834585d4f" containerName="registry-server" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.203670 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e742198-0241-4cac-ad5c-377834585d4f" containerName="registry-server" Jan 27 22:24:42 crc 
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.203763 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a96176-abc3-495a-a4bf-609cae102346" containerName="registry-server"
Jan 27 22:24:42 crc kubenswrapper[4803]: E0127 22:24:42.203823 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e742198-0241-4cac-ad5c-377834585d4f" containerName="extract-content"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.203895 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e742198-0241-4cac-ad5c-377834585d4f" containerName="extract-content"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.204162 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e742198-0241-4cac-ad5c-377834585d4f" containerName="registry-server"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.204239 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a626642b-e30b-4c1a-bf3d-aa1b6506002a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.204298 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6a96176-abc3-495a-a4bf-609cae102346" containerName="registry-server"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.204361 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e7e6ff-8205-4792-8af0-7cbd20aa2ebd" containerName="registry-server"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.205333 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-92xsq"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.209227 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.209331 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.209331 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.209385 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.213206 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-92xsq"]
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.272232 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-92xsq\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " pod="openstack/ssh-known-hosts-edpm-deployment-92xsq"
Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.272557 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhq6p\" (UniqueName: \"kubernetes.io/projected/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-kube-api-access-qhq6p\") pod \"ssh-known-hosts-edpm-deployment-92xsq\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " pod="openstack/ssh-known-hosts-edpm-deployment-92xsq"
pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.272756 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-92xsq\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.374589 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-92xsq\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.374713 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-92xsq\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.374753 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhq6p\" (UniqueName: \"kubernetes.io/projected/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-kube-api-access-qhq6p\") pod \"ssh-known-hosts-edpm-deployment-92xsq\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.381081 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-92xsq\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.388815 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-92xsq\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.394154 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhq6p\" (UniqueName: \"kubernetes.io/projected/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-kube-api-access-qhq6p\") pod \"ssh-known-hosts-edpm-deployment-92xsq\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" Jan 27 22:24:42 crc kubenswrapper[4803]: I0127 22:24:42.561429 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" Jan 27 22:24:43 crc kubenswrapper[4803]: I0127 22:24:43.346823 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-92xsq"] Jan 27 22:24:44 crc kubenswrapper[4803]: I0127 22:24:44.119268 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" event={"ID":"2077afa2-d0de-4ed0-ad3d-289cba1c27a5","Type":"ContainerStarted","Data":"376d6addf2a99d75015a79e8a318b9434bf6aea5abd7469235e179249c7e1e90"} Jan 27 22:24:45 crc kubenswrapper[4803]: I0127 22:24:45.132227 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" event={"ID":"2077afa2-d0de-4ed0-ad3d-289cba1c27a5","Type":"ContainerStarted","Data":"fe70b8518723b5f3a972fb1d7668bd4f806a09bdd5bfa197f7af8d4b543125ea"} Jan 27 22:24:45 crc kubenswrapper[4803]: I0127 22:24:45.147814 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" podStartSLOduration=2.594204382 podStartE2EDuration="3.147802101s" podCreationTimestamp="2026-01-27 22:24:42 +0000 UTC" firstStartedPulling="2026-01-27 22:24:43.348066196 +0000 UTC m=+2235.764087895" lastFinishedPulling="2026-01-27 22:24:43.901663915 +0000 UTC m=+2236.317685614" observedRunningTime="2026-01-27 22:24:45.146463634 +0000 UTC m=+2237.562485333" watchObservedRunningTime="2026-01-27 22:24:45.147802101 +0000 UTC m=+2237.563823800" Jan 27 22:24:47 crc kubenswrapper[4803]: I0127 22:24:47.306814 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:24:47 crc kubenswrapper[4803]: E0127 22:24:47.307550 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:24:52 crc kubenswrapper[4803]: I0127 22:24:52.253042 4803 generic.go:334] "Generic (PLEG): container finished" podID="2077afa2-d0de-4ed0-ad3d-289cba1c27a5" containerID="fe70b8518723b5f3a972fb1d7668bd4f806a09bdd5bfa197f7af8d4b543125ea" exitCode=0 Jan 27 22:24:52 crc kubenswrapper[4803]: I0127 22:24:52.253132 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" event={"ID":"2077afa2-d0de-4ed0-ad3d-289cba1c27a5","Type":"ContainerDied","Data":"fe70b8518723b5f3a972fb1d7668bd4f806a09bdd5bfa197f7af8d4b543125ea"} Jan 27 22:24:53 crc kubenswrapper[4803]: I0127 22:24:53.733094 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" Jan 27 22:24:53 crc kubenswrapper[4803]: I0127 22:24:53.781692 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-ssh-key-openstack-edpm-ipam\") pod \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " Jan 27 22:24:53 crc kubenswrapper[4803]: I0127 22:24:53.782022 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-inventory-0\") pod \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " Jan 27 22:24:53 crc kubenswrapper[4803]: I0127 22:24:53.782119 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhq6p\" (UniqueName: \"kubernetes.io/projected/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-kube-api-access-qhq6p\") pod \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\" (UID: \"2077afa2-d0de-4ed0-ad3d-289cba1c27a5\") " Jan 27 22:24:53 crc kubenswrapper[4803]: I0127 22:24:53.798183 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-kube-api-access-qhq6p" (OuterVolumeSpecName: "kube-api-access-qhq6p") pod "2077afa2-d0de-4ed0-ad3d-289cba1c27a5" (UID: "2077afa2-d0de-4ed0-ad3d-289cba1c27a5"). InnerVolumeSpecName "kube-api-access-qhq6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:24:53 crc kubenswrapper[4803]: I0127 22:24:53.822950 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2077afa2-d0de-4ed0-ad3d-289cba1c27a5" (UID: "2077afa2-d0de-4ed0-ad3d-289cba1c27a5"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:24:53 crc kubenswrapper[4803]: I0127 22:24:53.823466 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2077afa2-d0de-4ed0-ad3d-289cba1c27a5" (UID: "2077afa2-d0de-4ed0-ad3d-289cba1c27a5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:24:53 crc kubenswrapper[4803]: I0127 22:24:53.884653 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhq6p\" (UniqueName: \"kubernetes.io/projected/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-kube-api-access-qhq6p\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:53 crc kubenswrapper[4803]: I0127 22:24:53.884697 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:53 crc kubenswrapper[4803]: I0127 22:24:53.884709 4803 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2077afa2-d0de-4ed0-ad3d-289cba1c27a5-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.272190 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" event={"ID":"2077afa2-d0de-4ed0-ad3d-289cba1c27a5","Type":"ContainerDied","Data":"376d6addf2a99d75015a79e8a318b9434bf6aea5abd7469235e179249c7e1e90"} Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.272457 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="376d6addf2a99d75015a79e8a318b9434bf6aea5abd7469235e179249c7e1e90" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.272216 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-92xsq" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.346274 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7"] Jan 27 22:24:54 crc kubenswrapper[4803]: E0127 22:24:54.346909 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2077afa2-d0de-4ed0-ad3d-289cba1c27a5" containerName="ssh-known-hosts-edpm-deployment" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.346970 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="2077afa2-d0de-4ed0-ad3d-289cba1c27a5" containerName="ssh-known-hosts-edpm-deployment" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.347269 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="2077afa2-d0de-4ed0-ad3d-289cba1c27a5" containerName="ssh-known-hosts-edpm-deployment" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.348501 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.350534 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.350773 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.351044 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.351216 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.399886 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7"] Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.499040 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfnk\" (UniqueName: \"kubernetes.io/projected/121278dd-a3d1-4108-8a1a-2995e0ec2517-kube-api-access-4qfnk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fbbk7\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.499157 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fbbk7\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.499248 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fbbk7\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.601091 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qfnk\" (UniqueName: \"kubernetes.io/projected/121278dd-a3d1-4108-8a1a-2995e0ec2517-kube-api-access-4qfnk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fbbk7\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.601197 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fbbk7\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.601288 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-fbbk7\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.608367 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fbbk7\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.609406 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fbbk7\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.617423 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qfnk\" (UniqueName: \"kubernetes.io/projected/121278dd-a3d1-4108-8a1a-2995e0ec2517-kube-api-access-4qfnk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fbbk7\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:24:54 crc kubenswrapper[4803]: I0127 22:24:54.672830 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:24:55 crc kubenswrapper[4803]: I0127 22:24:55.207804 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7"] Jan 27 22:24:55 crc kubenswrapper[4803]: W0127 22:24:55.215891 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod121278dd_a3d1_4108_8a1a_2995e0ec2517.slice/crio-a8e5bb9f7ab7cd4f2b8366f846a6d2781fbe9770a414303c336bc146bc3408b6 WatchSource:0}: Error finding container a8e5bb9f7ab7cd4f2b8366f846a6d2781fbe9770a414303c336bc146bc3408b6: Status 404 returned error can't find the container with id a8e5bb9f7ab7cd4f2b8366f846a6d2781fbe9770a414303c336bc146bc3408b6 Jan 27 22:24:55 crc kubenswrapper[4803]: I0127 22:24:55.307299 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" event={"ID":"121278dd-a3d1-4108-8a1a-2995e0ec2517","Type":"ContainerStarted","Data":"a8e5bb9f7ab7cd4f2b8366f846a6d2781fbe9770a414303c336bc146bc3408b6"} Jan 27 22:24:57 crc kubenswrapper[4803]: I0127 22:24:57.338871 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" event={"ID":"121278dd-a3d1-4108-8a1a-2995e0ec2517","Type":"ContainerStarted","Data":"feb2857a4bab84e7f9ef36f7c5bfd200ac23c4e21015aa9b5ab337d26bbc9d36"} Jan 27 22:24:57 crc kubenswrapper[4803]: I0127 22:24:57.359925 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" podStartSLOduration=1.852250727 podStartE2EDuration="3.359909341s" podCreationTimestamp="2026-01-27 22:24:54 +0000 UTC" firstStartedPulling="2026-01-27 22:24:55.219051167 +0000 UTC m=+2247.635072866" lastFinishedPulling="2026-01-27 22:24:56.726709781 +0000 UTC m=+2249.142731480" 
observedRunningTime="2026-01-27 22:24:57.3561431 +0000 UTC m=+2249.772164799" watchObservedRunningTime="2026-01-27 22:24:57.359909341 +0000 UTC m=+2249.775931030" Jan 27 22:24:59 crc kubenswrapper[4803]: I0127 22:24:59.307171 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:24:59 crc kubenswrapper[4803]: E0127 22:24:59.308153 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:25:05 crc kubenswrapper[4803]: I0127 22:25:05.418953 4803 generic.go:334] "Generic (PLEG): container finished" podID="121278dd-a3d1-4108-8a1a-2995e0ec2517" containerID="feb2857a4bab84e7f9ef36f7c5bfd200ac23c4e21015aa9b5ab337d26bbc9d36" exitCode=0 Jan 27 22:25:05 crc kubenswrapper[4803]: I0127 22:25:05.419037 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" event={"ID":"121278dd-a3d1-4108-8a1a-2995e0ec2517","Type":"ContainerDied","Data":"feb2857a4bab84e7f9ef36f7c5bfd200ac23c4e21015aa9b5ab337d26bbc9d36"} Jan 27 22:25:06 crc kubenswrapper[4803]: I0127 22:25:06.918721 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.017830 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-inventory\") pod \"121278dd-a3d1-4108-8a1a-2995e0ec2517\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.018062 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qfnk\" (UniqueName: \"kubernetes.io/projected/121278dd-a3d1-4108-8a1a-2995e0ec2517-kube-api-access-4qfnk\") pod \"121278dd-a3d1-4108-8a1a-2995e0ec2517\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.018086 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-ssh-key-openstack-edpm-ipam\") pod \"121278dd-a3d1-4108-8a1a-2995e0ec2517\" (UID: \"121278dd-a3d1-4108-8a1a-2995e0ec2517\") " Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.026984 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/121278dd-a3d1-4108-8a1a-2995e0ec2517-kube-api-access-4qfnk" (OuterVolumeSpecName: "kube-api-access-4qfnk") pod "121278dd-a3d1-4108-8a1a-2995e0ec2517" (UID: "121278dd-a3d1-4108-8a1a-2995e0ec2517"). InnerVolumeSpecName "kube-api-access-4qfnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.060179 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "121278dd-a3d1-4108-8a1a-2995e0ec2517" (UID: "121278dd-a3d1-4108-8a1a-2995e0ec2517"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.063348 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-inventory" (OuterVolumeSpecName: "inventory") pod "121278dd-a3d1-4108-8a1a-2995e0ec2517" (UID: "121278dd-a3d1-4108-8a1a-2995e0ec2517"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.120984 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qfnk\" (UniqueName: \"kubernetes.io/projected/121278dd-a3d1-4108-8a1a-2995e0ec2517-kube-api-access-4qfnk\") on node \"crc\" DevicePath \"\"" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.121027 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.121041 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/121278dd-a3d1-4108-8a1a-2995e0ec2517-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.444142 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" event={"ID":"121278dd-a3d1-4108-8a1a-2995e0ec2517","Type":"ContainerDied","Data":"a8e5bb9f7ab7cd4f2b8366f846a6d2781fbe9770a414303c336bc146bc3408b6"} Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.444180 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fbbk7" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.444186 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8e5bb9f7ab7cd4f2b8366f846a6d2781fbe9770a414303c336bc146bc3408b6" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.525528 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d"] Jan 27 22:25:07 crc kubenswrapper[4803]: E0127 22:25:07.527990 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="121278dd-a3d1-4108-8a1a-2995e0ec2517" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.528029 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="121278dd-a3d1-4108-8a1a-2995e0ec2517" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.528569 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="121278dd-a3d1-4108-8a1a-2995e0ec2517" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.530007 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.532804 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.532871 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.532960 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.542568 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.542283 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d"] Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.633480 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.633750 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.633936 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r8sl\" (UniqueName: \"kubernetes.io/projected/b8ff2541-3983-461b-bbf6-20c732f107f0-kube-api-access-6r8sl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.736533 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r8sl\" (UniqueName: \"kubernetes.io/projected/b8ff2541-3983-461b-bbf6-20c732f107f0-kube-api-access-6r8sl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.736929 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.737103 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.740524 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.741425 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.755694 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r8sl\" (UniqueName: \"kubernetes.io/projected/b8ff2541-3983-461b-bbf6-20c732f107f0-kube-api-access-6r8sl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:07 crc kubenswrapper[4803]: I0127 22:25:07.847446 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:08 crc kubenswrapper[4803]: I0127 22:25:08.413616 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d"] Jan 27 22:25:08 crc kubenswrapper[4803]: I0127 22:25:08.458822 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" event={"ID":"b8ff2541-3983-461b-bbf6-20c732f107f0","Type":"ContainerStarted","Data":"1a0d11d89d8babb27d01c7f4a8899bc58a65ca38cc73a270844d536dbbdd5089"} Jan 27 22:25:09 crc kubenswrapper[4803]: I0127 22:25:09.471419 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" event={"ID":"b8ff2541-3983-461b-bbf6-20c732f107f0","Type":"ContainerStarted","Data":"165b8c9689dd96743c130a8bc11dbe0b157beeb310c493b0aace658b6df61544"} Jan 27 22:25:09 crc kubenswrapper[4803]: I0127 22:25:09.489196 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" podStartSLOduration=1.9872862850000002 podStartE2EDuration="2.489176203s" podCreationTimestamp="2026-01-27 22:25:07 +0000 UTC" firstStartedPulling="2026-01-27 22:25:08.415566029 +0000 UTC m=+2260.831587728" lastFinishedPulling="2026-01-27 22:25:08.917455947 +0000 UTC m=+2261.333477646" observedRunningTime="2026-01-27 22:25:09.483684965 +0000 UTC m=+2261.899706664" watchObservedRunningTime="2026-01-27 22:25:09.489176203 +0000 UTC m=+2261.905197902" Jan 27 22:25:10 crc kubenswrapper[4803]: I0127 22:25:10.307605 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:25:10 crc kubenswrapper[4803]: E0127 22:25:10.308399 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:25:19 crc kubenswrapper[4803]: I0127 22:25:19.592355 4803 generic.go:334] "Generic (PLEG): container finished" podID="b8ff2541-3983-461b-bbf6-20c732f107f0" containerID="165b8c9689dd96743c130a8bc11dbe0b157beeb310c493b0aace658b6df61544" exitCode=0 Jan 27 22:25:19 crc kubenswrapper[4803]: I0127 22:25:19.592453 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" event={"ID":"b8ff2541-3983-461b-bbf6-20c732f107f0","Type":"ContainerDied","Data":"165b8c9689dd96743c130a8bc11dbe0b157beeb310c493b0aace658b6df61544"} Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.054124 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.102087 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-ssh-key-openstack-edpm-ipam\") pod \"b8ff2541-3983-461b-bbf6-20c732f107f0\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.102271 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-inventory\") pod \"b8ff2541-3983-461b-bbf6-20c732f107f0\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.102457 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r8sl\" (UniqueName: \"kubernetes.io/projected/b8ff2541-3983-461b-bbf6-20c732f107f0-kube-api-access-6r8sl\") pod \"b8ff2541-3983-461b-bbf6-20c732f107f0\" (UID: \"b8ff2541-3983-461b-bbf6-20c732f107f0\") " Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.107225 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ff2541-3983-461b-bbf6-20c732f107f0-kube-api-access-6r8sl" (OuterVolumeSpecName: "kube-api-access-6r8sl") pod "b8ff2541-3983-461b-bbf6-20c732f107f0" (UID: "b8ff2541-3983-461b-bbf6-20c732f107f0"). InnerVolumeSpecName "kube-api-access-6r8sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.136221 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b8ff2541-3983-461b-bbf6-20c732f107f0" (UID: "b8ff2541-3983-461b-bbf6-20c732f107f0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.156084 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-inventory" (OuterVolumeSpecName: "inventory") pod "b8ff2541-3983-461b-bbf6-20c732f107f0" (UID: "b8ff2541-3983-461b-bbf6-20c732f107f0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.205468 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r8sl\" (UniqueName: \"kubernetes.io/projected/b8ff2541-3983-461b-bbf6-20c732f107f0-kube-api-access-6r8sl\") on node \"crc\" DevicePath \"\"" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.205498 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.205509 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8ff2541-3983-461b-bbf6-20c732f107f0-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.612977 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" event={"ID":"b8ff2541-3983-461b-bbf6-20c732f107f0","Type":"ContainerDied","Data":"1a0d11d89d8babb27d01c7f4a8899bc58a65ca38cc73a270844d536dbbdd5089"} Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.613264 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a0d11d89d8babb27d01c7f4a8899bc58a65ca38cc73a270844d536dbbdd5089" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.613070 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.707905 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q"] Jan 27 22:25:21 crc kubenswrapper[4803]: E0127 22:25:21.708536 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ff2541-3983-461b-bbf6-20c732f107f0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.708581 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ff2541-3983-461b-bbf6-20c732f107f0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.708902 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ff2541-3983-461b-bbf6-20c732f107f0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.710038 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.713127 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.713221 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.713306 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.713415 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.713485 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.714028 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.714238 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.717786 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.718396 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.723696 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q"] Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818314 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818364 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818406 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818437 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818476 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818563 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818584 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5fv8\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-kube-api-access-j5fv8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818606 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818634 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818679 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818701 
4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818739 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818779 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818821 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818839 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.818884 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.920362 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.920423 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.920495 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921179 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921230 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921286 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921322 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921345 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921407 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921423 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5fv8\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-kube-api-access-j5fv8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921450 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921475 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921513 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921531 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921576 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.921628 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: 
I0127 22:25:21.924796 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.925392 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.927379 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.934400 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.935193 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.935508 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.935720 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.936072 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.936875 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.937227 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.937822 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.938253 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.940148 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.942991 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.950442 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:21 crc kubenswrapper[4803]: I0127 22:25:21.950796 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5fv8\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-kube-api-access-j5fv8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:22 crc kubenswrapper[4803]: I0127 22:25:22.026165 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:25:22 crc kubenswrapper[4803]: W0127 22:25:22.622593 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0384ac7e_8b90_4801_85ee_ed8323cc2d73.slice/crio-568d13511b935faed7b2cbce475bea69e2002ac60b742c5aee1d73a0761853bb WatchSource:0}: Error finding container 568d13511b935faed7b2cbce475bea69e2002ac60b742c5aee1d73a0761853bb: Status 404 returned error can't find the container with id 568d13511b935faed7b2cbce475bea69e2002ac60b742c5aee1d73a0761853bb Jan 27 22:25:22 crc kubenswrapper[4803]: I0127 22:25:22.623243 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q"] Jan 27 22:25:23 crc kubenswrapper[4803]: I0127 22:25:23.655421 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" event={"ID":"0384ac7e-8b90-4801-85ee-ed8323cc2d73","Type":"ContainerStarted","Data":"712a8fc3e09e618fe97c5a4474d04648646cc21cf6e6becbfd529dd3e1313a0a"} Jan 27 22:25:23 crc kubenswrapper[4803]: I0127 22:25:23.656133 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" event={"ID":"0384ac7e-8b90-4801-85ee-ed8323cc2d73","Type":"ContainerStarted","Data":"568d13511b935faed7b2cbce475bea69e2002ac60b742c5aee1d73a0761853bb"} Jan 27 22:25:23 crc kubenswrapper[4803]: I0127 22:25:23.682178 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" podStartSLOduration=2.255822267 podStartE2EDuration="2.68215584s" podCreationTimestamp="2026-01-27 22:25:21 +0000 UTC" firstStartedPulling="2026-01-27 22:25:22.625601767 +0000 UTC m=+2275.041623466" lastFinishedPulling="2026-01-27 22:25:23.05193534 +0000 UTC m=+2275.467957039" observedRunningTime="2026-01-27 22:25:23.67357311 +0000 UTC m=+2276.089594829" watchObservedRunningTime="2026-01-27 22:25:23.68215584 +0000 UTC m=+2276.098177529" Jan 27 22:25:24 crc kubenswrapper[4803]: I0127 22:25:24.307037 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:25:24 crc kubenswrapper[4803]: E0127 22:25:24.307589 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:25:28 crc kubenswrapper[4803]: I0127 22:25:28.040262 4803 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-sjbk6"] Jan 27 22:25:28 crc kubenswrapper[4803]: I0127 22:25:28.050882 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-sjbk6"] Jan 27 22:25:28 crc kubenswrapper[4803]: I0127 22:25:28.332863 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfdeec7e-e323-4a7a-9a5c-badcec773861" path="/var/lib/kubelet/pods/dfdeec7e-e323-4a7a-9a5c-badcec773861/volumes" Jan 27 22:25:34 crc kubenswrapper[4803]: I0127 22:25:34.461964 4803 scope.go:117] "RemoveContainer" containerID="4e8904f633efb98534f8bc13cebf0c884236f320cec56539f3081c3775e0f2e6" Jan 27 22:25:37 crc kubenswrapper[4803]: I0127 22:25:37.308031 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:25:37 crc kubenswrapper[4803]: E0127 22:25:37.308884 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:25:52 crc kubenswrapper[4803]: I0127 22:25:52.306830 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:25:52 crc kubenswrapper[4803]: E0127 22:25:52.307695 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:26:05 crc kubenswrapper[4803]: I0127 22:26:05.307026 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:26:05 crc kubenswrapper[4803]: E0127 22:26:05.307829 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:26:06 crc kubenswrapper[4803]: I0127 22:26:06.123432 4803 generic.go:334] "Generic (PLEG): container finished" podID="0384ac7e-8b90-4801-85ee-ed8323cc2d73" containerID="712a8fc3e09e618fe97c5a4474d04648646cc21cf6e6becbfd529dd3e1313a0a" exitCode=0 Jan 27 22:26:06 crc kubenswrapper[4803]: I0127 22:26:06.123479 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" event={"ID":"0384ac7e-8b90-4801-85ee-ed8323cc2d73","Type":"ContainerDied","Data":"712a8fc3e09e618fe97c5a4474d04648646cc21cf6e6becbfd529dd3e1313a0a"} Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.598973 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.678341 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.678807 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ssh-key-openstack-edpm-ipam\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.678834 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-combined-ca-bundle\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.678896 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-inventory\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.678973 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-nova-combined-ca-bundle\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.679031 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-repo-setup-combined-ca-bundle\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.679070 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.679104 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-libvirt-combined-ca-bundle\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.679127 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5fv8\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-kube-api-access-j5fv8\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: 
\"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.679169 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-bootstrap-combined-ca-bundle\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.679237 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ovn-combined-ca-bundle\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.679267 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-power-monitoring-combined-ca-bundle\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.679317 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.679344 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-neutron-metadata-combined-ca-bundle\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.679407 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.679498 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-ovn-default-certs-0\") pod \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\" (UID: \"0384ac7e-8b90-4801-85ee-ed8323cc2d73\") " Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.685413 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.685520 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.686016 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.686775 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.687114 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.688570 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.689184 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.690843 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.691063 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-kube-api-access-j5fv8" (OuterVolumeSpecName: "kube-api-access-j5fv8") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "kube-api-access-j5fv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.691796 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.692549 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.696612 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.697505 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.701488 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.718508 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.720698 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-inventory" (OuterVolumeSpecName: "inventory") pod "0384ac7e-8b90-4801-85ee-ed8323cc2d73" (UID: "0384ac7e-8b90-4801-85ee-ed8323cc2d73"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782120 4803 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782154 4803 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782168 4803 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782177 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782186 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782197 4803 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782206 4803 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782216 4803 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782228 4803 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782238 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5fv8\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-kube-api-access-j5fv8\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782247 4803 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782255 4803 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782264 4803 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782274 4803 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782303 4803 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0384ac7e-8b90-4801-85ee-ed8323cc2d73-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:07 crc kubenswrapper[4803]: I0127 22:26:07.782312 4803 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0384ac7e-8b90-4801-85ee-ed8323cc2d73-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.151537 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" event={"ID":"0384ac7e-8b90-4801-85ee-ed8323cc2d73","Type":"ContainerDied","Data":"568d13511b935faed7b2cbce475bea69e2002ac60b742c5aee1d73a0761853bb"} Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.151624 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="568d13511b935faed7b2cbce475bea69e2002ac60b742c5aee1d73a0761853bb" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.151645 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.304644 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k"] Jan 27 22:26:08 crc kubenswrapper[4803]: E0127 22:26:08.305510 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0384ac7e-8b90-4801-85ee-ed8323cc2d73" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.305543 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0384ac7e-8b90-4801-85ee-ed8323cc2d73" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.305934 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="0384ac7e-8b90-4801-85ee-ed8323cc2d73" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.307695 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.317865 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.318260 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.319295 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.320255 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.330588 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.343645 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k"] Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.402628 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.402715 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.403297 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: 
\"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.403403 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmc4n\" (UniqueName: \"kubernetes.io/projected/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-kube-api-access-vmc4n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.403809 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.506364 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.506497 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.506527 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmc4n\" (UniqueName: \"kubernetes.io/projected/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-kube-api-access-vmc4n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.506615 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.506735 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.509316 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.514232 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.515867 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.526440 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.539705 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmc4n\" (UniqueName: \"kubernetes.io/projected/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-kube-api-access-vmc4n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nqn7k\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:08 crc kubenswrapper[4803]: I0127 22:26:08.646293 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:26:09 crc kubenswrapper[4803]: I0127 22:26:09.052380 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-hrwh7"] Jan 27 22:26:09 crc kubenswrapper[4803]: I0127 22:26:09.063411 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-hrwh7"] Jan 27 22:26:09 crc kubenswrapper[4803]: I0127 22:26:09.325129 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k"] Jan 27 22:26:10 crc kubenswrapper[4803]: I0127 22:26:10.173384 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" event={"ID":"a0677e52-1a37-44b2-9627-6cb40b6d6f6d","Type":"ContainerStarted","Data":"bc20c234a38d23a61ca4615520543386c47a5149028929d756b1fd1e59e0ce37"} Jan 27 22:26:10 crc kubenswrapper[4803]: I0127 22:26:10.174142 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" event={"ID":"a0677e52-1a37-44b2-9627-6cb40b6d6f6d","Type":"ContainerStarted","Data":"be8f44e4a8ab5801d588c6cd03ae04559f2f8f197769f72facd534e272bddba3"} Jan 27 22:26:10 crc kubenswrapper[4803]: I0127 22:26:10.201792 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" podStartSLOduration=1.725759258 podStartE2EDuration="2.201774099s" podCreationTimestamp="2026-01-27 22:26:08 +0000 UTC" firstStartedPulling="2026-01-27 22:26:09.338053715 +0000 UTC m=+2321.754075404" lastFinishedPulling="2026-01-27 22:26:09.814068546 +0000 UTC m=+2322.230090245" observedRunningTime="2026-01-27 22:26:10.189472258 +0000 UTC m=+2322.605493967" watchObservedRunningTime="2026-01-27 22:26:10.201774099 +0000 UTC m=+2322.617795798" Jan 27 22:26:10 crc kubenswrapper[4803]: I0127 22:26:10.322636 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6886b51d-5eac-48bf-9a10-98a0b8a8d051" path="/var/lib/kubelet/pods/6886b51d-5eac-48bf-9a10-98a0b8a8d051/volumes" Jan 27 22:26:20 crc kubenswrapper[4803]: I0127 22:26:20.308005 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:26:20 crc kubenswrapper[4803]: E0127 22:26:20.309172 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:26:33 crc kubenswrapper[4803]: I0127 22:26:33.307375 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:26:33 crc kubenswrapper[4803]: E0127 22:26:33.308288 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:26:34 crc kubenswrapper[4803]: I0127 22:26:34.552507 4803 
scope.go:117] "RemoveContainer" containerID="e351a744d7fe6d1ae3aec5a7563af071f38139a7507eff32e5a87bd498e58ef4" Jan 27 22:26:47 crc kubenswrapper[4803]: I0127 22:26:47.306398 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:26:47 crc kubenswrapper[4803]: E0127 22:26:47.307191 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:27:01 crc kubenswrapper[4803]: I0127 22:27:01.307281 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:27:01 crc kubenswrapper[4803]: E0127 22:27:01.308189 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:27:13 crc kubenswrapper[4803]: I0127 22:27:13.883143 4803 generic.go:334] "Generic (PLEG): container finished" podID="a0677e52-1a37-44b2-9627-6cb40b6d6f6d" containerID="bc20c234a38d23a61ca4615520543386c47a5149028929d756b1fd1e59e0ce37" exitCode=0 Jan 27 22:27:13 crc kubenswrapper[4803]: I0127 22:27:13.883238 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" event={"ID":"a0677e52-1a37-44b2-9627-6cb40b6d6f6d","Type":"ContainerDied","Data":"bc20c234a38d23a61ca4615520543386c47a5149028929d756b1fd1e59e0ce37"} Jan 27 22:27:14 crc kubenswrapper[4803]: I0127 22:27:14.307086 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:27:14 crc kubenswrapper[4803]: E0127 22:27:14.307731 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.433076 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.486744 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-inventory\") pod \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.487192 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmc4n\" (UniqueName: \"kubernetes.io/projected/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-kube-api-access-vmc4n\") pod \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.487257 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ssh-key-openstack-edpm-ipam\") pod \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.487297 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovncontroller-config-0\") pod \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.487446 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovn-combined-ca-bundle\") pod \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\" (UID: \"a0677e52-1a37-44b2-9627-6cb40b6d6f6d\") " Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.494168 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-kube-api-access-vmc4n" (OuterVolumeSpecName: "kube-api-access-vmc4n") pod "a0677e52-1a37-44b2-9627-6cb40b6d6f6d" (UID: "a0677e52-1a37-44b2-9627-6cb40b6d6f6d"). InnerVolumeSpecName "kube-api-access-vmc4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.497050 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a0677e52-1a37-44b2-9627-6cb40b6d6f6d" (UID: "a0677e52-1a37-44b2-9627-6cb40b6d6f6d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.522674 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a0677e52-1a37-44b2-9627-6cb40b6d6f6d" (UID: "a0677e52-1a37-44b2-9627-6cb40b6d6f6d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.525072 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-inventory" (OuterVolumeSpecName: "inventory") pod "a0677e52-1a37-44b2-9627-6cb40b6d6f6d" (UID: "a0677e52-1a37-44b2-9627-6cb40b6d6f6d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.547287 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a0677e52-1a37-44b2-9627-6cb40b6d6f6d" (UID: "a0677e52-1a37-44b2-9627-6cb40b6d6f6d"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.589918 4803 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.589968 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.589979 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmc4n\" (UniqueName: \"kubernetes.io/projected/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-kube-api-access-vmc4n\") on node \"crc\" DevicePath \"\"" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.589991 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.590003 4803 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a0677e52-1a37-44b2-9627-6cb40b6d6f6d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.904296 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" event={"ID":"a0677e52-1a37-44b2-9627-6cb40b6d6f6d","Type":"ContainerDied","Data":"be8f44e4a8ab5801d588c6cd03ae04559f2f8f197769f72facd534e272bddba3"} Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.904655 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be8f44e4a8ab5801d588c6cd03ae04559f2f8f197769f72facd534e272bddba3" Jan 27 22:27:15 crc kubenswrapper[4803]: I0127 22:27:15.904398 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nqn7k" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.012722 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk"] Jan 27 22:27:16 crc kubenswrapper[4803]: E0127 22:27:16.013181 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0677e52-1a37-44b2-9627-6cb40b6d6f6d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.013193 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0677e52-1a37-44b2-9627-6cb40b6d6f6d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.013431 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0677e52-1a37-44b2-9627-6cb40b6d6f6d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.014198 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.016792 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.017956 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.017999 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.018121 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.017969 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.018235 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.027501 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk"] Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.099518 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.099598 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kmbx\" (UniqueName: \"kubernetes.io/projected/bc53f142-98d1-4024-b27b-923de13b8c31-kube-api-access-6kmbx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.099699 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.099765 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.099799 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.099908 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.202120 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.202250 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.202340 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.202466 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.202646 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.202693 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kmbx\" (UniqueName: \"kubernetes.io/projected/bc53f142-98d1-4024-b27b-923de13b8c31-kube-api-access-6kmbx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.209832 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.209957 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.210497 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.211023 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.217047 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:27:16 crc 
kubenswrapper[4803]: I0127 22:27:16.219199 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kmbx\" (UniqueName: \"kubernetes.io/projected/bc53f142-98d1-4024-b27b-923de13b8c31-kube-api-access-6kmbx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk"
Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.341369 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk"
Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.847658 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk"]
Jan 27 22:27:16 crc kubenswrapper[4803]: W0127 22:27:16.851293 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc53f142_98d1_4024_b27b_923de13b8c31.slice/crio-eaf65811602d7a3f84e29935df8a35821388ef153702a14b8a35705e2e0ca1ac WatchSource:0}: Error finding container eaf65811602d7a3f84e29935df8a35821388ef153702a14b8a35705e2e0ca1ac: Status 404 returned error can't find the container with id eaf65811602d7a3f84e29935df8a35821388ef153702a14b8a35705e2e0ca1ac
Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.855553 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 22:27:16 crc kubenswrapper[4803]: I0127 22:27:16.917474 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" event={"ID":"bc53f142-98d1-4024-b27b-923de13b8c31","Type":"ContainerStarted","Data":"eaf65811602d7a3f84e29935df8a35821388ef153702a14b8a35705e2e0ca1ac"}
Jan 27 22:27:17 crc kubenswrapper[4803]: I0127 22:27:17.929410 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" event={"ID":"bc53f142-98d1-4024-b27b-923de13b8c31","Type":"ContainerStarted","Data":"b7b0ac1d66b101e735347f3609d8891ef7b0bcdc65c338f3d8f00c94a5c123c6"}
Jan 27 22:27:17 crc kubenswrapper[4803]: I0127 22:27:17.955921 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" podStartSLOduration=2.303889919 podStartE2EDuration="2.955901476s" podCreationTimestamp="2026-01-27 22:27:15 +0000 UTC" firstStartedPulling="2026-01-27 22:27:16.855274637 +0000 UTC m=+2389.271296336" lastFinishedPulling="2026-01-27 22:27:17.507286194 +0000 UTC m=+2389.923307893" observedRunningTime="2026-01-27 22:27:17.945409804 +0000 UTC m=+2390.361431543" watchObservedRunningTime="2026-01-27 22:27:17.955901476 +0000 UTC m=+2390.371923175"
Jan 27 22:27:25 crc kubenswrapper[4803]: I0127 22:27:25.307823 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff"
Jan 27 22:27:25 crc kubenswrapper[4803]: E0127 22:27:25.308721 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
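The pod_startup_latency_tracker entry at 22:27:17 above encodes a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), matching the Kubernetes pod-startup SLI, which excludes image pulls. A short Go check of the arithmetic using the timestamps copied from that entry (a verification sketch only, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

// Recompute the two durations logged by pod_startup_latency_tracker for
// neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk:
//   e2e = watchObservedRunningTime - podCreationTimestamp
//   slo = e2e - (lastFinishedPulling - firstStartedPulling)
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-27 22:27:15 +0000 UTC")
	firstPull := parse("2026-01-27 22:27:16.855274637 +0000 UTC")
	lastPull := parse("2026-01-27 22:27:17.507286194 +0000 UTC")
	running := parse("2026-01-27 22:27:17.955901476 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e, slo) // 2.955901476s 2.303889919s
}
```

2.955901476s minus the 0.652011557s pull window gives 2.303889919s, exactly the logged podStartSLOduration.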
podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:27:38 crc kubenswrapper[4803]: I0127 22:27:38.323028 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:27:38 crc kubenswrapper[4803]: E0127 22:27:38.324364 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:27:51 crc kubenswrapper[4803]: I0127 22:27:51.307488 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:27:51 crc kubenswrapper[4803]: E0127 22:27:51.309442 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:28:04 crc kubenswrapper[4803]: I0127 22:28:04.307101 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:28:04 crc kubenswrapper[4803]: E0127 22:28:04.308022 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:28:05 crc kubenswrapper[4803]: I0127 22:28:05.422732 4803 generic.go:334] "Generic (PLEG): container finished" podID="bc53f142-98d1-4024-b27b-923de13b8c31" containerID="b7b0ac1d66b101e735347f3609d8891ef7b0bcdc65c338f3d8f00c94a5c123c6" exitCode=0 Jan 27 22:28:05 crc kubenswrapper[4803]: I0127 22:28:05.422795 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" event={"ID":"bc53f142-98d1-4024-b27b-923de13b8c31","Type":"ContainerDied","Data":"b7b0ac1d66b101e735347f3609d8891ef7b0bcdc65c338f3d8f00c94a5c123c6"} Jan 27 22:28:06 crc kubenswrapper[4803]: I0127 22:28:06.972429 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.099590 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-metadata-combined-ca-bundle\") pod \"bc53f142-98d1-4024-b27b-923de13b8c31\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.099697 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-nova-metadata-neutron-config-0\") pod \"bc53f142-98d1-4024-b27b-923de13b8c31\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.099760 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-inventory\") pod \"bc53f142-98d1-4024-b27b-923de13b8c31\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.099827 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-ovn-metadata-agent-neutron-config-0\") pod \"bc53f142-98d1-4024-b27b-923de13b8c31\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.099924 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kmbx\" (UniqueName: \"kubernetes.io/projected/bc53f142-98d1-4024-b27b-923de13b8c31-kube-api-access-6kmbx\") pod \"bc53f142-98d1-4024-b27b-923de13b8c31\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.099963 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-ssh-key-openstack-edpm-ipam\") pod \"bc53f142-98d1-4024-b27b-923de13b8c31\" (UID: \"bc53f142-98d1-4024-b27b-923de13b8c31\") " Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.110230 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc53f142-98d1-4024-b27b-923de13b8c31-kube-api-access-6kmbx" (OuterVolumeSpecName: "kube-api-access-6kmbx") pod "bc53f142-98d1-4024-b27b-923de13b8c31" (UID: "bc53f142-98d1-4024-b27b-923de13b8c31"). InnerVolumeSpecName "kube-api-access-6kmbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.111056 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "bc53f142-98d1-4024-b27b-923de13b8c31" (UID: "bc53f142-98d1-4024-b27b-923de13b8c31"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.132375 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bc53f142-98d1-4024-b27b-923de13b8c31" (UID: "bc53f142-98d1-4024-b27b-923de13b8c31"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.136711 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "bc53f142-98d1-4024-b27b-923de13b8c31" (UID: "bc53f142-98d1-4024-b27b-923de13b8c31"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.142896 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "bc53f142-98d1-4024-b27b-923de13b8c31" (UID: "bc53f142-98d1-4024-b27b-923de13b8c31"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.155103 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-inventory" (OuterVolumeSpecName: "inventory") pod "bc53f142-98d1-4024-b27b-923de13b8c31" (UID: "bc53f142-98d1-4024-b27b-923de13b8c31"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.204355 4803 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.204408 4803 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.204422 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.204431 4803 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.204441 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kmbx\" (UniqueName: \"kubernetes.io/projected/bc53f142-98d1-4024-b27b-923de13b8c31-kube-api-access-6kmbx\") on node \"crc\" DevicePath \"\"" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.204450 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc53f142-98d1-4024-b27b-923de13b8c31-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.444736 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" event={"ID":"bc53f142-98d1-4024-b27b-923de13b8c31","Type":"ContainerDied","Data":"eaf65811602d7a3f84e29935df8a35821388ef153702a14b8a35705e2e0ca1ac"} Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.445058 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaf65811602d7a3f84e29935df8a35821388ef153702a14b8a35705e2e0ca1ac" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.444791 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.557193 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j"] Jan 27 22:28:07 crc kubenswrapper[4803]: E0127 22:28:07.557988 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc53f142-98d1-4024-b27b-923de13b8c31" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.558020 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc53f142-98d1-4024-b27b-923de13b8c31" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.558430 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc53f142-98d1-4024-b27b-923de13b8c31" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.559661 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.562222 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.562306 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.562403 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.562462 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.562568 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.572078 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j"] Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.615039 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.615131 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.615160 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: 
\"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.615261 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk7c5\" (UniqueName: \"kubernetes.io/projected/8f694a10-2165-4256-8f2e-8c7691864c37-kube-api-access-jk7c5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.615279 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.717994 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk7c5\" (UniqueName: \"kubernetes.io/projected/8f694a10-2165-4256-8f2e-8c7691864c37-kube-api-access-jk7c5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.718045 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.718175 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.718249 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.718282 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.722639 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: 
\"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.722984 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.723955 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.725773 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.738943 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk7c5\" (UniqueName: \"kubernetes.io/projected/8f694a10-2165-4256-8f2e-8c7691864c37-kube-api-access-jk7c5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g567j\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:07 crc kubenswrapper[4803]: I0127 22:28:07.892289 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:28:08 crc kubenswrapper[4803]: W0127 22:28:08.448174 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f694a10_2165_4256_8f2e_8c7691864c37.slice/crio-cfda8c7d2c8214f84649a216096c0529b5bbd55812a6c6e57ade7f9c358bc12e WatchSource:0}: Error finding container cfda8c7d2c8214f84649a216096c0529b5bbd55812a6c6e57ade7f9c358bc12e: Status 404 returned error can't find the container with id cfda8c7d2c8214f84649a216096c0529b5bbd55812a6c6e57ade7f9c358bc12e Jan 27 22:28:08 crc kubenswrapper[4803]: I0127 22:28:08.449428 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j"] Jan 27 22:28:09 crc kubenswrapper[4803]: I0127 22:28:09.466735 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" event={"ID":"8f694a10-2165-4256-8f2e-8c7691864c37","Type":"ContainerStarted","Data":"9d30abc3ff0715ed3a45fd03a358550e6c531aedaf21636467d376352d4dd626"} Jan 27 22:28:09 crc kubenswrapper[4803]: I0127 22:28:09.467369 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" event={"ID":"8f694a10-2165-4256-8f2e-8c7691864c37","Type":"ContainerStarted","Data":"cfda8c7d2c8214f84649a216096c0529b5bbd55812a6c6e57ade7f9c358bc12e"} Jan 27 22:28:09 crc kubenswrapper[4803]: I0127 22:28:09.488488 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" podStartSLOduration=2.056166828 podStartE2EDuration="2.488471312s" podCreationTimestamp="2026-01-27 22:28:07 +0000 UTC" firstStartedPulling="2026-01-27 22:28:08.453105979 +0000 UTC m=+2440.869127678" lastFinishedPulling="2026-01-27 22:28:08.885410463 +0000 UTC m=+2441.301432162" observedRunningTime="2026-01-27 22:28:09.481793423 +0000 UTC m=+2441.897815112" watchObservedRunningTime="2026-01-27 22:28:09.488471312 +0000 UTC m=+2441.904493011" Jan 27 22:28:17 crc kubenswrapper[4803]: I0127 22:28:17.307144 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:28:17 crc kubenswrapper[4803]: E0127 22:28:17.308058 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:28:28 crc kubenswrapper[4803]: I0127 22:28:28.342729 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:28:28 crc kubenswrapper[4803]: E0127 22:28:28.343591 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:28:42 crc kubenswrapper[4803]: I0127 22:28:42.307554 4803 
scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:28:42 crc kubenswrapper[4803]: E0127 22:28:42.308405 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:28:57 crc kubenswrapper[4803]: I0127 22:28:57.307011 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:28:57 crc kubenswrapper[4803]: E0127 22:28:57.307902 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:29:11 crc kubenswrapper[4803]: I0127 22:29:11.307170 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:29:11 crc kubenswrapper[4803]: E0127 22:29:11.307885 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:29:26 crc kubenswrapper[4803]: I0127 22:29:26.307131 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:29:27 crc kubenswrapper[4803]: I0127 22:29:27.368123 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"aad6d684a8fe4b6e35ed33e9eec548144cbbc49c598e8df03d796cb382eedc86"} Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.188181 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2"] Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.191478 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.193895 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.194153 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.213285 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2"] Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.254626 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1775032d-620d-4e75-808b-eef53841271a-secret-volume\") pod \"collect-profiles-29492550-sbkn2\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.254716 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1775032d-620d-4e75-808b-eef53841271a-config-volume\") pod \"collect-profiles-29492550-sbkn2\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.254807 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqxrg\" (UniqueName: \"kubernetes.io/projected/1775032d-620d-4e75-808b-eef53841271a-kube-api-access-sqxrg\") pod \"collect-profiles-29492550-sbkn2\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.356662 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1775032d-620d-4e75-808b-eef53841271a-secret-volume\") pod \"collect-profiles-29492550-sbkn2\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.356755 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1775032d-620d-4e75-808b-eef53841271a-config-volume\") pod \"collect-profiles-29492550-sbkn2\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.356899 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqxrg\" (UniqueName: \"kubernetes.io/projected/1775032d-620d-4e75-808b-eef53841271a-kube-api-access-sqxrg\") pod \"collect-profiles-29492550-sbkn2\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.359232 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1775032d-620d-4e75-808b-eef53841271a-config-volume\") pod 
\"collect-profiles-29492550-sbkn2\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.363812 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1775032d-620d-4e75-808b-eef53841271a-secret-volume\") pod \"collect-profiles-29492550-sbkn2\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.374679 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqxrg\" (UniqueName: \"kubernetes.io/projected/1775032d-620d-4e75-808b-eef53841271a-kube-api-access-sqxrg\") pod \"collect-profiles-29492550-sbkn2\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:00 crc kubenswrapper[4803]: I0127 22:30:00.517084 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:01 crc kubenswrapper[4803]: I0127 22:30:01.014800 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2"] Jan 27 22:30:01 crc kubenswrapper[4803]: I0127 22:30:01.701654 4803 generic.go:334] "Generic (PLEG): container finished" podID="1775032d-620d-4e75-808b-eef53841271a" containerID="f99baa81b45bb177cf58ad26fe1328c949599626ea71140cc1e0ec92e9d4d4ac" exitCode=0 Jan 27 22:30:01 crc kubenswrapper[4803]: I0127 22:30:01.701751 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" event={"ID":"1775032d-620d-4e75-808b-eef53841271a","Type":"ContainerDied","Data":"f99baa81b45bb177cf58ad26fe1328c949599626ea71140cc1e0ec92e9d4d4ac"} Jan 27 22:30:01 crc kubenswrapper[4803]: I0127 22:30:01.702224 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" event={"ID":"1775032d-620d-4e75-808b-eef53841271a","Type":"ContainerStarted","Data":"33ed88a128aae9f7bc6c9a39dbbbd1d201eedb6b776e96848080a38a129514ad"} Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.152771 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.321524 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1775032d-620d-4e75-808b-eef53841271a-secret-volume\") pod \"1775032d-620d-4e75-808b-eef53841271a\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.321599 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1775032d-620d-4e75-808b-eef53841271a-config-volume\") pod \"1775032d-620d-4e75-808b-eef53841271a\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.321689 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqxrg\" (UniqueName: \"kubernetes.io/projected/1775032d-620d-4e75-808b-eef53841271a-kube-api-access-sqxrg\") pod \"1775032d-620d-4e75-808b-eef53841271a\" (UID: \"1775032d-620d-4e75-808b-eef53841271a\") " Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.323367 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1775032d-620d-4e75-808b-eef53841271a-config-volume" (OuterVolumeSpecName: "config-volume") pod "1775032d-620d-4e75-808b-eef53841271a" (UID: "1775032d-620d-4e75-808b-eef53841271a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.324470 4803 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1775032d-620d-4e75-808b-eef53841271a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.331542 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1775032d-620d-4e75-808b-eef53841271a-kube-api-access-sqxrg" (OuterVolumeSpecName: "kube-api-access-sqxrg") pod "1775032d-620d-4e75-808b-eef53841271a" (UID: "1775032d-620d-4e75-808b-eef53841271a"). InnerVolumeSpecName "kube-api-access-sqxrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.332121 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1775032d-620d-4e75-808b-eef53841271a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1775032d-620d-4e75-808b-eef53841271a" (UID: "1775032d-620d-4e75-808b-eef53841271a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.425671 4803 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1775032d-620d-4e75-808b-eef53841271a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.425707 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqxrg\" (UniqueName: \"kubernetes.io/projected/1775032d-620d-4e75-808b-eef53841271a-kube-api-access-sqxrg\") on node \"crc\" DevicePath \"\"" Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.726881 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" event={"ID":"1775032d-620d-4e75-808b-eef53841271a","Type":"ContainerDied","Data":"33ed88a128aae9f7bc6c9a39dbbbd1d201eedb6b776e96848080a38a129514ad"} Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.726922 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33ed88a128aae9f7bc6c9a39dbbbd1d201eedb6b776e96848080a38a129514ad" Jan 27 22:30:03 crc kubenswrapper[4803]: I0127 22:30:03.726978 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2" Jan 27 22:30:04 crc kubenswrapper[4803]: I0127 22:30:04.228297 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn"] Jan 27 22:30:04 crc kubenswrapper[4803]: I0127 22:30:04.238426 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492505-22jdn"] Jan 27 22:30:04 crc kubenswrapper[4803]: I0127 22:30:04.328374 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3f105d-fa65-4c69-b14e-aac96d07c7e9" path="/var/lib/kubelet/pods/dc3f105d-fa65-4c69-b14e-aac96d07c7e9/volumes" Jan 27 22:30:34 crc kubenswrapper[4803]: I0127 22:30:34.695028 4803 scope.go:117] "RemoveContainer" containerID="ad9bedb4b2a967814717b0c63a94ab9d31b10a8d5f4dc8ee85afc7d8a08d5a01" Jan 27 22:31:46 crc kubenswrapper[4803]: I0127 22:31:46.343539 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:31:46 crc kubenswrapper[4803]: I0127 22:31:46.344231 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:32:16 crc kubenswrapper[4803]: I0127 22:32:16.233573 4803 generic.go:334] "Generic (PLEG): container finished" podID="8f694a10-2165-4256-8f2e-8c7691864c37" containerID="9d30abc3ff0715ed3a45fd03a358550e6c531aedaf21636467d376352d4dd626" exitCode=0 Jan 27 22:32:16 crc kubenswrapper[4803]: I0127 22:32:16.233656 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" event={"ID":"8f694a10-2165-4256-8f2e-8c7691864c37","Type":"ContainerDied","Data":"9d30abc3ff0715ed3a45fd03a358550e6c531aedaf21636467d376352d4dd626"} Jan 27 22:32:16 crc 
kubenswrapper[4803]: I0127 22:32:16.343478 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:32:16 crc kubenswrapper[4803]: I0127 22:32:16.343541 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.753949 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.891301 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-inventory\") pod \"8f694a10-2165-4256-8f2e-8c7691864c37\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.891367 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-ssh-key-openstack-edpm-ipam\") pod \"8f694a10-2165-4256-8f2e-8c7691864c37\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.892298 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-combined-ca-bundle\") pod \"8f694a10-2165-4256-8f2e-8c7691864c37\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.892402 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk7c5\" (UniqueName: \"kubernetes.io/projected/8f694a10-2165-4256-8f2e-8c7691864c37-kube-api-access-jk7c5\") pod \"8f694a10-2165-4256-8f2e-8c7691864c37\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.892613 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-secret-0\") pod \"8f694a10-2165-4256-8f2e-8c7691864c37\" (UID: \"8f694a10-2165-4256-8f2e-8c7691864c37\") " Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.898264 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f694a10-2165-4256-8f2e-8c7691864c37-kube-api-access-jk7c5" (OuterVolumeSpecName: "kube-api-access-jk7c5") pod "8f694a10-2165-4256-8f2e-8c7691864c37" (UID: "8f694a10-2165-4256-8f2e-8c7691864c37"). InnerVolumeSpecName "kube-api-access-jk7c5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.898602 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8f694a10-2165-4256-8f2e-8c7691864c37" (UID: "8f694a10-2165-4256-8f2e-8c7691864c37"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.923495 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8f694a10-2165-4256-8f2e-8c7691864c37" (UID: "8f694a10-2165-4256-8f2e-8c7691864c37"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.924723 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-inventory" (OuterVolumeSpecName: "inventory") pod "8f694a10-2165-4256-8f2e-8c7691864c37" (UID: "8f694a10-2165-4256-8f2e-8c7691864c37"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.944309 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8f694a10-2165-4256-8f2e-8c7691864c37" (UID: "8f694a10-2165-4256-8f2e-8c7691864c37"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:32:17 crc kubenswrapper[4803]: I0127 22:32:17.999918 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.000029 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.000075 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk7c5\" (UniqueName: \"kubernetes.io/projected/8f694a10-2165-4256-8f2e-8c7691864c37-kube-api-access-jk7c5\") on node \"crc\" DevicePath \"\"" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.000087 4803 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.000101 4803 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8f694a10-2165-4256-8f2e-8c7691864c37-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.256190 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" event={"ID":"8f694a10-2165-4256-8f2e-8c7691864c37","Type":"ContainerDied","Data":"cfda8c7d2c8214f84649a216096c0529b5bbd55812a6c6e57ade7f9c358bc12e"} Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.256226 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfda8c7d2c8214f84649a216096c0529b5bbd55812a6c6e57ade7f9c358bc12e" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.256354 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g567j" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.407718 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv"] Jan 27 22:32:18 crc kubenswrapper[4803]: E0127 22:32:18.408354 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f694a10-2165-4256-8f2e-8c7691864c37" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.408375 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f694a10-2165-4256-8f2e-8c7691864c37" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 22:32:18 crc kubenswrapper[4803]: E0127 22:32:18.408391 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1775032d-620d-4e75-808b-eef53841271a" containerName="collect-profiles" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.408397 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="1775032d-620d-4e75-808b-eef53841271a" containerName="collect-profiles" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.408620 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f694a10-2165-4256-8f2e-8c7691864c37" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.408650 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="1775032d-620d-4e75-808b-eef53841271a" containerName="collect-profiles" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.409493 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.412148 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.412164 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.412715 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.416221 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.416243 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.416458 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.421154 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.425893 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv"] Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.525354 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: 
\"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.525454 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8809871a-286e-42ac-8156-13ad485cf174-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.525484 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.525647 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.525787 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljr2\" (UniqueName: \"kubernetes.io/projected/8809871a-286e-42ac-8156-13ad485cf174-kube-api-access-dljr2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.526000 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.526048 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.526114 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.526230 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.628805 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.628926 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dljr2\" (UniqueName: \"kubernetes.io/projected/8809871a-286e-42ac-8156-13ad485cf174-kube-api-access-dljr2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.629020 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.629047 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.629081 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.629161 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.629207 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.629292 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/8809871a-286e-42ac-8156-13ad485cf174-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.629327 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.630150 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8809871a-286e-42ac-8156-13ad485cf174-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.633408 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.634064 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.634640 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.635290 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.638623 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.638767 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.641616 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.649497 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dljr2\" (UniqueName: \"kubernetes.io/projected/8809871a-286e-42ac-8156-13ad485cf174-kube-api-access-dljr2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-blsvv\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:18 crc kubenswrapper[4803]: I0127 22:32:18.740575 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:32:19 crc kubenswrapper[4803]: I0127 22:32:19.325050 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv"] Jan 27 22:32:19 crc kubenswrapper[4803]: W0127 22:32:19.328731 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8809871a_286e_42ac_8156_13ad485cf174.slice/crio-0357d5d1651f616bb2c9dc42f4a21dab0d72ba133d1d305e73d11b5a0a9ed830 WatchSource:0}: Error finding container 0357d5d1651f616bb2c9dc42f4a21dab0d72ba133d1d305e73d11b5a0a9ed830: Status 404 returned error can't find the container with id 0357d5d1651f616bb2c9dc42f4a21dab0d72ba133d1d305e73d11b5a0a9ed830 Jan 27 22:32:19 crc kubenswrapper[4803]: I0127 22:32:19.331115 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 22:32:20 crc kubenswrapper[4803]: I0127 22:32:20.287048 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" event={"ID":"8809871a-286e-42ac-8156-13ad485cf174","Type":"ContainerStarted","Data":"f08e43cd152a78b0f1a4c1ebd8876e589e2439be89fbcbf1b19347755e8e63da"} Jan 27 22:32:20 crc kubenswrapper[4803]: I0127 22:32:20.287820 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" event={"ID":"8809871a-286e-42ac-8156-13ad485cf174","Type":"ContainerStarted","Data":"0357d5d1651f616bb2c9dc42f4a21dab0d72ba133d1d305e73d11b5a0a9ed830"} Jan 27 22:32:20 crc kubenswrapper[4803]: I0127 22:32:20.340293 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" podStartSLOduration=1.8033415179999999 podStartE2EDuration="2.340270987s" podCreationTimestamp="2026-01-27 22:32:18 +0000 UTC" firstStartedPulling="2026-01-27 22:32:19.33088228 +0000 UTC m=+2691.746903979" lastFinishedPulling="2026-01-27 22:32:19.867811759 +0000 UTC m=+2692.283833448" observedRunningTime="2026-01-27 22:32:20.332885154 +0000 UTC m=+2692.748906863" watchObservedRunningTime="2026-01-27 22:32:20.340270987 +0000 UTC m=+2692.756292686" Jan 27 22:32:46 crc kubenswrapper[4803]: I0127 22:32:46.343298 4803 patch_prober.go:28] interesting 
pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:32:46 crc kubenswrapper[4803]: I0127 22:32:46.343867 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:32:46 crc kubenswrapper[4803]: I0127 22:32:46.343920 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 22:32:46 crc kubenswrapper[4803]: I0127 22:32:46.344868 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aad6d684a8fe4b6e35ed33e9eec548144cbbc49c598e8df03d796cb382eedc86"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 22:32:46 crc kubenswrapper[4803]: I0127 22:32:46.344927 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://aad6d684a8fe4b6e35ed33e9eec548144cbbc49c598e8df03d796cb382eedc86" gracePeriod=600 Jan 27 22:32:46 crc kubenswrapper[4803]: I0127 22:32:46.603128 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="aad6d684a8fe4b6e35ed33e9eec548144cbbc49c598e8df03d796cb382eedc86" exitCode=0 Jan 27 22:32:46 crc kubenswrapper[4803]: I0127 22:32:46.603173 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"aad6d684a8fe4b6e35ed33e9eec548144cbbc49c598e8df03d796cb382eedc86"} Jan 27 22:32:46 crc kubenswrapper[4803]: I0127 22:32:46.603210 4803 scope.go:117] "RemoveContainer" containerID="6a22355df9054ebde45456449adf017c78666422d56a76e69c35237cafa024ff" Jan 27 22:32:47 crc kubenswrapper[4803]: I0127 22:32:47.613034 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b"} Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.048569 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nb72q"] Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.051513 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.084714 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nb72q"] Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.216752 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28gn6\" (UniqueName: \"kubernetes.io/projected/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-kube-api-access-28gn6\") pod \"certified-operators-nb72q\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.217158 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-utilities\") pod \"certified-operators-nb72q\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.217408 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-catalog-content\") pod \"certified-operators-nb72q\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.319938 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28gn6\" (UniqueName: \"kubernetes.io/projected/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-kube-api-access-28gn6\") pod \"certified-operators-nb72q\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.320235 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-utilities\") pod \"certified-operators-nb72q\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.320406 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-catalog-content\") pod \"certified-operators-nb72q\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.320720 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-utilities\") pod \"certified-operators-nb72q\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.320770 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-catalog-content\") pod \"certified-operators-nb72q\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.341697 4803 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-28gn6\" (UniqueName: \"kubernetes.io/projected/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-kube-api-access-28gn6\") pod \"certified-operators-nb72q\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.373966 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:21 crc kubenswrapper[4803]: I0127 22:34:21.948258 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nb72q"] Jan 27 22:34:22 crc kubenswrapper[4803]: I0127 22:34:22.618981 4803 generic.go:334] "Generic (PLEG): container finished" podID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" containerID="42493d0278d4ca6bb4e24ba87058b404cac634d12bdd1e6ffbc4dc25f68b00d7" exitCode=0 Jan 27 22:34:22 crc kubenswrapper[4803]: I0127 22:34:22.619189 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb72q" event={"ID":"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902","Type":"ContainerDied","Data":"42493d0278d4ca6bb4e24ba87058b404cac634d12bdd1e6ffbc4dc25f68b00d7"} Jan 27 22:34:22 crc kubenswrapper[4803]: I0127 22:34:22.619597 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb72q" event={"ID":"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902","Type":"ContainerStarted","Data":"c0ba62b09bb1843aad7116ec4db0d22058cd31dce1fb83bea2bc17197da21081"} Jan 27 22:34:23 crc kubenswrapper[4803]: I0127 22:34:23.634048 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb72q" event={"ID":"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902","Type":"ContainerStarted","Data":"34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e"} Jan 27 22:34:25 crc kubenswrapper[4803]: I0127 22:34:25.652475 4803 generic.go:334] "Generic (PLEG): container finished" podID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" containerID="34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e" exitCode=0 Jan 27 22:34:25 crc kubenswrapper[4803]: I0127 22:34:25.652550 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb72q" event={"ID":"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902","Type":"ContainerDied","Data":"34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e"} Jan 27 22:34:26 crc kubenswrapper[4803]: I0127 22:34:26.668143 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb72q" event={"ID":"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902","Type":"ContainerStarted","Data":"88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec"} Jan 27 22:34:26 crc kubenswrapper[4803]: I0127 22:34:26.729051 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nb72q" podStartSLOduration=2.248057708 podStartE2EDuration="5.729023879s" podCreationTimestamp="2026-01-27 22:34:21 +0000 UTC" firstStartedPulling="2026-01-27 22:34:22.625447425 +0000 UTC m=+2815.041469154" lastFinishedPulling="2026-01-27 22:34:26.106413636 +0000 UTC m=+2818.522435325" observedRunningTime="2026-01-27 22:34:26.722330684 +0000 UTC m=+2819.138352393" watchObservedRunningTime="2026-01-27 22:34:26.729023879 +0000 UTC m=+2819.145045578" Jan 27 22:34:31 crc kubenswrapper[4803]: I0127 22:34:31.375496 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:31 crc kubenswrapper[4803]: I0127 22:34:31.376330 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:31 crc kubenswrapper[4803]: I0127 22:34:31.436593 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:31 crc kubenswrapper[4803]: I0127 22:34:31.791297 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:31 crc kubenswrapper[4803]: I0127 22:34:31.853583 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nb72q"] Jan 27 22:34:33 crc kubenswrapper[4803]: I0127 22:34:33.737256 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nb72q" podUID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" containerName="registry-server" containerID="cri-o://88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec" gracePeriod=2 Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.292025 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.473635 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28gn6\" (UniqueName: \"kubernetes.io/projected/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-kube-api-access-28gn6\") pod \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.473805 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-catalog-content\") pod \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.473864 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-utilities\") pod \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\" (UID: \"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902\") " Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.474913 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-utilities" (OuterVolumeSpecName: "utilities") pod "b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" (UID: "b0e0cb44-a8cf-4a23-9db6-c3e301cb8902"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.485110 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-kube-api-access-28gn6" (OuterVolumeSpecName: "kube-api-access-28gn6") pod "b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" (UID: "b0e0cb44-a8cf-4a23-9db6-c3e301cb8902"). InnerVolumeSpecName "kube-api-access-28gn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.527576 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" (UID: "b0e0cb44-a8cf-4a23-9db6-c3e301cb8902"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.576173 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28gn6\" (UniqueName: \"kubernetes.io/projected/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-kube-api-access-28gn6\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.576203 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.576213 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.749658 4803 generic.go:334] "Generic (PLEG): container finished" podID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" containerID="88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec" exitCode=0 Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.749727 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nb72q" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.749712 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb72q" event={"ID":"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902","Type":"ContainerDied","Data":"88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec"} Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.750078 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb72q" event={"ID":"b0e0cb44-a8cf-4a23-9db6-c3e301cb8902","Type":"ContainerDied","Data":"c0ba62b09bb1843aad7116ec4db0d22058cd31dce1fb83bea2bc17197da21081"} Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.750104 4803 scope.go:117] "RemoveContainer" containerID="88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.786699 4803 scope.go:117] "RemoveContainer" containerID="34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.791903 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nb72q"] Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.803438 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nb72q"] Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.815099 4803 scope.go:117] "RemoveContainer" containerID="42493d0278d4ca6bb4e24ba87058b404cac634d12bdd1e6ffbc4dc25f68b00d7" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.894689 4803 scope.go:117] "RemoveContainer" containerID="88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec" Jan 27 22:34:34 crc kubenswrapper[4803]: E0127 22:34:34.895228 4803 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec\": container with ID starting with 88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec not found: ID does not exist" containerID="88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.895273 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec"} err="failed to get container status \"88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec\": rpc error: code = NotFound desc = could not find container \"88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec\": container with ID starting with 88e1b2d45ff36fd33b82ee3669fb1d162a5316842bb75a8af0265e0d641636ec not found: ID does not exist" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.895298 4803 scope.go:117] "RemoveContainer" containerID="34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e" Jan 27 22:34:34 crc kubenswrapper[4803]: E0127 22:34:34.895577 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e\": container with ID starting with 34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e not found: ID does not exist" containerID="34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.895660 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e"} err="failed to get container status \"34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e\": rpc error: code = NotFound desc = could not find container \"34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e\": container with ID starting with 34a1aceabfba92c95bb5e82ac0c688abf891e4288df9e187de971da7eec4380e not found: ID does not exist" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.895681 4803 scope.go:117] "RemoveContainer" containerID="42493d0278d4ca6bb4e24ba87058b404cac634d12bdd1e6ffbc4dc25f68b00d7" Jan 27 22:34:34 crc kubenswrapper[4803]: E0127 22:34:34.896161 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42493d0278d4ca6bb4e24ba87058b404cac634d12bdd1e6ffbc4dc25f68b00d7\": container with ID starting with 42493d0278d4ca6bb4e24ba87058b404cac634d12bdd1e6ffbc4dc25f68b00d7 not found: ID does not exist" containerID="42493d0278d4ca6bb4e24ba87058b404cac634d12bdd1e6ffbc4dc25f68b00d7" Jan 27 22:34:34 crc kubenswrapper[4803]: I0127 22:34:34.896190 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42493d0278d4ca6bb4e24ba87058b404cac634d12bdd1e6ffbc4dc25f68b00d7"} err="failed to get container status \"42493d0278d4ca6bb4e24ba87058b404cac634d12bdd1e6ffbc4dc25f68b00d7\": rpc error: code = NotFound desc = could not find container \"42493d0278d4ca6bb4e24ba87058b404cac634d12bdd1e6ffbc4dc25f68b00d7\": container with ID starting with 42493d0278d4ca6bb4e24ba87058b404cac634d12bdd1e6ffbc4dc25f68b00d7 not found: ID does not exist" Jan 27 22:34:36 crc kubenswrapper[4803]: I0127 22:34:36.321204 4803 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" path="/var/lib/kubelet/pods/b0e0cb44-a8cf-4a23-9db6-c3e301cb8902/volumes" Jan 27 22:34:39 crc kubenswrapper[4803]: I0127 22:34:39.816911 4803 generic.go:334] "Generic (PLEG): container finished" podID="8809871a-286e-42ac-8156-13ad485cf174" containerID="f08e43cd152a78b0f1a4c1ebd8876e589e2439be89fbcbf1b19347755e8e63da" exitCode=0 Jan 27 22:34:39 crc kubenswrapper[4803]: I0127 22:34:39.816958 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" event={"ID":"8809871a-286e-42ac-8156-13ad485cf174","Type":"ContainerDied","Data":"f08e43cd152a78b0f1a4c1ebd8876e589e2439be89fbcbf1b19347755e8e63da"} Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.346807 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.456790 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-0\") pod \"8809871a-286e-42ac-8156-13ad485cf174\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.456877 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-ssh-key-openstack-edpm-ipam\") pod \"8809871a-286e-42ac-8156-13ad485cf174\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.456912 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8809871a-286e-42ac-8156-13ad485cf174-nova-extra-config-0\") pod \"8809871a-286e-42ac-8156-13ad485cf174\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.456978 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-combined-ca-bundle\") pod \"8809871a-286e-42ac-8156-13ad485cf174\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.457002 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-1\") pod \"8809871a-286e-42ac-8156-13ad485cf174\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.457033 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-inventory\") pod \"8809871a-286e-42ac-8156-13ad485cf174\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.457154 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-1\") pod \"8809871a-286e-42ac-8156-13ad485cf174\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " Jan 27 22:34:41 crc kubenswrapper[4803]: 
I0127 22:34:41.457188 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dljr2\" (UniqueName: \"kubernetes.io/projected/8809871a-286e-42ac-8156-13ad485cf174-kube-api-access-dljr2\") pod \"8809871a-286e-42ac-8156-13ad485cf174\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.457553 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-0\") pod \"8809871a-286e-42ac-8156-13ad485cf174\" (UID: \"8809871a-286e-42ac-8156-13ad485cf174\") " Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.465679 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8809871a-286e-42ac-8156-13ad485cf174" (UID: "8809871a-286e-42ac-8156-13ad485cf174"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.465924 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8809871a-286e-42ac-8156-13ad485cf174-kube-api-access-dljr2" (OuterVolumeSpecName: "kube-api-access-dljr2") pod "8809871a-286e-42ac-8156-13ad485cf174" (UID: "8809871a-286e-42ac-8156-13ad485cf174"). InnerVolumeSpecName "kube-api-access-dljr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.500390 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8809871a-286e-42ac-8156-13ad485cf174" (UID: "8809871a-286e-42ac-8156-13ad485cf174"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.500770 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8809871a-286e-42ac-8156-13ad485cf174" (UID: "8809871a-286e-42ac-8156-13ad485cf174"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.503039 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8809871a-286e-42ac-8156-13ad485cf174" (UID: "8809871a-286e-42ac-8156-13ad485cf174"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.506723 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8809871a-286e-42ac-8156-13ad485cf174" (UID: "8809871a-286e-42ac-8156-13ad485cf174"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.507243 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8809871a-286e-42ac-8156-13ad485cf174-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8809871a-286e-42ac-8156-13ad485cf174" (UID: "8809871a-286e-42ac-8156-13ad485cf174"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.514048 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8809871a-286e-42ac-8156-13ad485cf174" (UID: "8809871a-286e-42ac-8156-13ad485cf174"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.526135 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-inventory" (OuterVolumeSpecName: "inventory") pod "8809871a-286e-42ac-8156-13ad485cf174" (UID: "8809871a-286e-42ac-8156-13ad485cf174"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.561269 4803 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.561311 4803 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.561323 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.561333 4803 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.561342 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dljr2\" (UniqueName: \"kubernetes.io/projected/8809871a-286e-42ac-8156-13ad485cf174-kube-api-access-dljr2\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.561351 4803 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.561359 4803 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.561369 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8809871a-286e-42ac-8156-13ad485cf174-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.561377 4803 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8809871a-286e-42ac-8156-13ad485cf174-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.836905 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" event={"ID":"8809871a-286e-42ac-8156-13ad485cf174","Type":"ContainerDied","Data":"0357d5d1651f616bb2c9dc42f4a21dab0d72ba133d1d305e73d11b5a0a9ed830"} Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.836958 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-blsvv" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.836971 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0357d5d1651f616bb2c9dc42f4a21dab0d72ba133d1d305e73d11b5a0a9ed830" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.966461 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7"] Jan 27 22:34:41 crc kubenswrapper[4803]: E0127 22:34:41.967116 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" containerName="extract-content" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.967145 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" containerName="extract-content" Jan 27 22:34:41 crc kubenswrapper[4803]: E0127 22:34:41.967159 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8809871a-286e-42ac-8156-13ad485cf174" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.967168 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8809871a-286e-42ac-8156-13ad485cf174" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 22:34:41 crc kubenswrapper[4803]: E0127 22:34:41.967201 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" containerName="extract-utilities" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.967210 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" containerName="extract-utilities" Jan 27 22:34:41 crc kubenswrapper[4803]: E0127 22:34:41.967236 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" containerName="registry-server" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.967244 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" containerName="registry-server" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.967554 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8809871a-286e-42ac-8156-13ad485cf174" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.967598 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e0cb44-a8cf-4a23-9db6-c3e301cb8902" containerName="registry-server" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.968584 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.972728 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.972915 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.973114 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.973255 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.973313 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:34:41 crc kubenswrapper[4803]: I0127 22:34:41.982425 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7"] Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.074148 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bfh6\" (UniqueName: \"kubernetes.io/projected/a9b994a1-9306-48ee-a202-62a8506f2f15-kube-api-access-4bfh6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.074213 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.074344 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.074367 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.074429 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.074646 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.074777 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.177217 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.177300 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.177343 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.177404 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bfh6\" (UniqueName: \"kubernetes.io/projected/a9b994a1-9306-48ee-a202-62a8506f2f15-kube-api-access-4bfh6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.177429 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.177503 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.177524 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.182496 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.189510 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.191272 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.192366 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.193304 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.202492 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bfh6\" (UniqueName: \"kubernetes.io/projected/a9b994a1-9306-48ee-a202-62a8506f2f15-kube-api-access-4bfh6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.204002 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-plgl7\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.299420 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:34:42 crc kubenswrapper[4803]: I0127 22:34:42.884722 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7"] Jan 27 22:34:43 crc kubenswrapper[4803]: I0127 22:34:43.861454 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" event={"ID":"a9b994a1-9306-48ee-a202-62a8506f2f15","Type":"ContainerStarted","Data":"c8b3bedbbe65685d673ad4e97c28399df1d6a1ad0c786c2dab969329e3b645cd"} Jan 27 22:34:43 crc kubenswrapper[4803]: I0127 22:34:43.861517 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" event={"ID":"a9b994a1-9306-48ee-a202-62a8506f2f15","Type":"ContainerStarted","Data":"cb249a22ce50c59d9f093024ae4023b0cddf3494670e9b8f9d3bae72b4276b91"} Jan 27 22:34:43 crc kubenswrapper[4803]: I0127 22:34:43.898361 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" podStartSLOduration=2.481416733 podStartE2EDuration="2.898342596s" podCreationTimestamp="2026-01-27 22:34:41 +0000 UTC" firstStartedPulling="2026-01-27 22:34:42.891354894 +0000 UTC m=+2835.307376593" lastFinishedPulling="2026-01-27 22:34:43.308280747 +0000 UTC m=+2835.724302456" observedRunningTime="2026-01-27 22:34:43.893793521 +0000 UTC m=+2836.309815220" watchObservedRunningTime="2026-01-27 22:34:43.898342596 +0000 UTC m=+2836.314364295" Jan 27 22:34:46 crc kubenswrapper[4803]: I0127 22:34:46.344228 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:34:46 crc kubenswrapper[4803]: I0127 22:34:46.344986 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:35:16 crc kubenswrapper[4803]: I0127 22:35:16.343245 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:35:16 crc kubenswrapper[4803]: I0127 22:35:16.343806 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.380615 4803 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-smmpp"] Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.383596 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.397946 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-smmpp"] Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.521495 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-utilities\") pod \"redhat-operators-smmpp\" (UID: \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.522288 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5smf\" (UniqueName: \"kubernetes.io/projected/7bf0d30f-86d8-4106-b1c6-34cecb11471e-kube-api-access-z5smf\") pod \"redhat-operators-smmpp\" (UID: \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.522803 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-catalog-content\") pod \"redhat-operators-smmpp\" (UID: \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.625529 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5smf\" (UniqueName: \"kubernetes.io/projected/7bf0d30f-86d8-4106-b1c6-34cecb11471e-kube-api-access-z5smf\") pod \"redhat-operators-smmpp\" (UID: \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.625644 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-catalog-content\") pod \"redhat-operators-smmpp\" (UID: \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.625729 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-utilities\") pod \"redhat-operators-smmpp\" (UID: \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.626207 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-catalog-content\") pod \"redhat-operators-smmpp\" (UID: \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.626228 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-utilities\") pod \"redhat-operators-smmpp\" (UID: 
\"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.656741 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5smf\" (UniqueName: \"kubernetes.io/projected/7bf0d30f-86d8-4106-b1c6-34cecb11471e-kube-api-access-z5smf\") pod \"redhat-operators-smmpp\" (UID: \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:35 crc kubenswrapper[4803]: I0127 22:35:35.708121 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:36 crc kubenswrapper[4803]: I0127 22:35:36.218423 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-smmpp"] Jan 27 22:35:36 crc kubenswrapper[4803]: I0127 22:35:36.467035 4803 generic.go:334] "Generic (PLEG): container finished" podID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerID="6c88705b224b919dbaa710d2dbb1e504d33506554be1b03423c369efa12e9a5b" exitCode=0 Jan 27 22:35:36 crc kubenswrapper[4803]: I0127 22:35:36.467089 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmpp" event={"ID":"7bf0d30f-86d8-4106-b1c6-34cecb11471e","Type":"ContainerDied","Data":"6c88705b224b919dbaa710d2dbb1e504d33506554be1b03423c369efa12e9a5b"} Jan 27 22:35:36 crc kubenswrapper[4803]: I0127 22:35:36.467134 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmpp" event={"ID":"7bf0d30f-86d8-4106-b1c6-34cecb11471e","Type":"ContainerStarted","Data":"6f12f14999f134154d8024640a1b1f9824b697fa48d896a6f6079113c7865f8f"} Jan 27 22:35:38 crc kubenswrapper[4803]: I0127 22:35:38.488373 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmpp" event={"ID":"7bf0d30f-86d8-4106-b1c6-34cecb11471e","Type":"ContainerStarted","Data":"1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459"} Jan 27 22:35:44 crc kubenswrapper[4803]: I0127 22:35:44.560696 4803 generic.go:334] "Generic (PLEG): container finished" podID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerID="1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459" exitCode=0 Jan 27 22:35:44 crc kubenswrapper[4803]: I0127 22:35:44.560806 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmpp" event={"ID":"7bf0d30f-86d8-4106-b1c6-34cecb11471e","Type":"ContainerDied","Data":"1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459"} Jan 27 22:35:45 crc kubenswrapper[4803]: I0127 22:35:45.575557 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmpp" event={"ID":"7bf0d30f-86d8-4106-b1c6-34cecb11471e","Type":"ContainerStarted","Data":"a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86"} Jan 27 22:35:45 crc kubenswrapper[4803]: I0127 22:35:45.609880 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-smmpp" podStartSLOduration=2.085517282 podStartE2EDuration="10.609818672s" podCreationTimestamp="2026-01-27 22:35:35 +0000 UTC" firstStartedPulling="2026-01-27 22:35:36.469077668 +0000 UTC m=+2888.885099357" lastFinishedPulling="2026-01-27 22:35:44.993379048 +0000 UTC m=+2897.409400747" observedRunningTime="2026-01-27 22:35:45.60026183 +0000 UTC m=+2898.016283539" watchObservedRunningTime="2026-01-27 
22:35:45.609818672 +0000 UTC m=+2898.025840401" Jan 27 22:35:45 crc kubenswrapper[4803]: I0127 22:35:45.708899 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:45 crc kubenswrapper[4803]: I0127 22:35:45.709027 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:46 crc kubenswrapper[4803]: I0127 22:35:46.343212 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:35:46 crc kubenswrapper[4803]: I0127 22:35:46.343505 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:35:46 crc kubenswrapper[4803]: I0127 22:35:46.343548 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 22:35:46 crc kubenswrapper[4803]: I0127 22:35:46.344565 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 22:35:46 crc kubenswrapper[4803]: I0127 22:35:46.344624 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" gracePeriod=600 Jan 27 22:35:46 crc kubenswrapper[4803]: E0127 22:35:46.464998 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:35:46 crc kubenswrapper[4803]: I0127 22:35:46.605035 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" exitCode=0 Jan 27 22:35:46 crc kubenswrapper[4803]: I0127 22:35:46.605104 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b"} Jan 27 22:35:46 crc kubenswrapper[4803]: I0127 22:35:46.605173 4803 scope.go:117] "RemoveContainer" containerID="aad6d684a8fe4b6e35ed33e9eec548144cbbc49c598e8df03d796cb382eedc86" Jan 27 22:35:46 crc kubenswrapper[4803]: I0127 22:35:46.606637 4803 scope.go:117] 
"RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:35:46 crc kubenswrapper[4803]: E0127 22:35:46.606960 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:35:46 crc kubenswrapper[4803]: I0127 22:35:46.769557 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-smmpp" podUID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerName="registry-server" probeResult="failure" output=< Jan 27 22:35:46 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:35:46 crc kubenswrapper[4803]: > Jan 27 22:35:55 crc kubenswrapper[4803]: I0127 22:35:55.758678 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:55 crc kubenswrapper[4803]: I0127 22:35:55.813322 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:56 crc kubenswrapper[4803]: I0127 22:35:56.001680 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-smmpp"] Jan 27 22:35:57 crc kubenswrapper[4803]: I0127 22:35:57.714383 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-smmpp" podUID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerName="registry-server" containerID="cri-o://a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86" gracePeriod=2 Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.313100 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.437148 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-utilities\") pod \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\" (UID: \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.437418 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5smf\" (UniqueName: \"kubernetes.io/projected/7bf0d30f-86d8-4106-b1c6-34cecb11471e-kube-api-access-z5smf\") pod \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\" (UID: \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.437634 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-catalog-content\") pod \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\" (UID: \"7bf0d30f-86d8-4106-b1c6-34cecb11471e\") " Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.438103 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-utilities" (OuterVolumeSpecName: "utilities") pod "7bf0d30f-86d8-4106-b1c6-34cecb11471e" (UID: "7bf0d30f-86d8-4106-b1c6-34cecb11471e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.438528 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.444508 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bf0d30f-86d8-4106-b1c6-34cecb11471e-kube-api-access-z5smf" (OuterVolumeSpecName: "kube-api-access-z5smf") pod "7bf0d30f-86d8-4106-b1c6-34cecb11471e" (UID: "7bf0d30f-86d8-4106-b1c6-34cecb11471e"). InnerVolumeSpecName "kube-api-access-z5smf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.541156 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5smf\" (UniqueName: \"kubernetes.io/projected/7bf0d30f-86d8-4106-b1c6-34cecb11471e-kube-api-access-z5smf\") on node \"crc\" DevicePath \"\"" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.552284 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bf0d30f-86d8-4106-b1c6-34cecb11471e" (UID: "7bf0d30f-86d8-4106-b1c6-34cecb11471e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.643918 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bf0d30f-86d8-4106-b1c6-34cecb11471e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.726408 4803 generic.go:334] "Generic (PLEG): container finished" podID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerID="a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86" exitCode=0 Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.726480 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-smmpp" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.726481 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmpp" event={"ID":"7bf0d30f-86d8-4106-b1c6-34cecb11471e","Type":"ContainerDied","Data":"a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86"} Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.726526 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmpp" event={"ID":"7bf0d30f-86d8-4106-b1c6-34cecb11471e","Type":"ContainerDied","Data":"6f12f14999f134154d8024640a1b1f9824b697fa48d896a6f6079113c7865f8f"} Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.726549 4803 scope.go:117] "RemoveContainer" containerID="a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.774450 4803 scope.go:117] "RemoveContainer" containerID="1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.786472 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-smmpp"] Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.802073 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-smmpp"] Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.812252 4803 scope.go:117] "RemoveContainer" containerID="6c88705b224b919dbaa710d2dbb1e504d33506554be1b03423c369efa12e9a5b" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.853820 4803 scope.go:117] "RemoveContainer" containerID="a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86" Jan 27 22:35:58 crc kubenswrapper[4803]: E0127 22:35:58.854514 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86\": container with ID starting with a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86 not found: ID does not exist" containerID="a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.854547 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86"} err="failed to get container status \"a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86\": rpc error: code = NotFound desc = could not find container \"a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86\": container with ID starting with a4d7f6efa71ae9be0cb7ffc82c412989f90ec7a0ed9418110e62f3ddcaddad86 not found: ID does not exist" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.854566 4803 scope.go:117] "RemoveContainer" containerID="1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459" Jan 27 22:35:58 crc kubenswrapper[4803]: E0127 22:35:58.854954 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459\": container with ID starting with 1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459 not found: ID does not exist" containerID="1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.855001 4803 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459"} err="failed to get container status \"1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459\": rpc error: code = NotFound desc = could not find container \"1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459\": container with ID starting with 1842514b50eeac86551070ab87bd316640a3e7481f73bcfbfb80e54994012459 not found: ID does not exist" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.855192 4803 scope.go:117] "RemoveContainer" containerID="6c88705b224b919dbaa710d2dbb1e504d33506554be1b03423c369efa12e9a5b" Jan 27 22:35:58 crc kubenswrapper[4803]: E0127 22:35:58.855443 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c88705b224b919dbaa710d2dbb1e504d33506554be1b03423c369efa12e9a5b\": container with ID starting with 6c88705b224b919dbaa710d2dbb1e504d33506554be1b03423c369efa12e9a5b not found: ID does not exist" containerID="6c88705b224b919dbaa710d2dbb1e504d33506554be1b03423c369efa12e9a5b" Jan 27 22:35:58 crc kubenswrapper[4803]: I0127 22:35:58.855467 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c88705b224b919dbaa710d2dbb1e504d33506554be1b03423c369efa12e9a5b"} err="failed to get container status \"6c88705b224b919dbaa710d2dbb1e504d33506554be1b03423c369efa12e9a5b\": rpc error: code = NotFound desc = could not find container \"6c88705b224b919dbaa710d2dbb1e504d33506554be1b03423c369efa12e9a5b\": container with ID starting with 6c88705b224b919dbaa710d2dbb1e504d33506554be1b03423c369efa12e9a5b not found: ID does not exist" Jan 27 22:35:59 crc kubenswrapper[4803]: I0127 22:35:59.307549 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:35:59 crc kubenswrapper[4803]: E0127 22:35:59.308421 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:36:00 crc kubenswrapper[4803]: I0127 22:36:00.323374 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" path="/var/lib/kubelet/pods/7bf0d30f-86d8-4106-b1c6-34cecb11471e/volumes" Jan 27 22:36:14 crc kubenswrapper[4803]: I0127 22:36:14.307124 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:36:14 crc kubenswrapper[4803]: E0127 22:36:14.308006 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:36:29 crc kubenswrapper[4803]: I0127 22:36:29.307067 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:36:29 crc 
kubenswrapper[4803]: E0127 22:36:29.308171 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:36:44 crc kubenswrapper[4803]: I0127 22:36:44.307602 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:36:44 crc kubenswrapper[4803]: E0127 22:36:44.308406 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:36:55 crc kubenswrapper[4803]: I0127 22:36:55.308100 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:36:55 crc kubenswrapper[4803]: E0127 22:36:55.309527 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:37:10 crc kubenswrapper[4803]: I0127 22:37:10.307048 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:37:10 crc kubenswrapper[4803]: E0127 22:37:10.308838 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:37:10 crc kubenswrapper[4803]: I0127 22:37:10.513074 4803 generic.go:334] "Generic (PLEG): container finished" podID="a9b994a1-9306-48ee-a202-62a8506f2f15" containerID="c8b3bedbbe65685d673ad4e97c28399df1d6a1ad0c786c2dab969329e3b645cd" exitCode=0 Jan 27 22:37:10 crc kubenswrapper[4803]: I0127 22:37:10.513086 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" event={"ID":"a9b994a1-9306-48ee-a202-62a8506f2f15","Type":"ContainerDied","Data":"c8b3bedbbe65685d673ad4e97c28399df1d6a1ad0c786c2dab969329e3b645cd"} Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.023196 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.057206 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bfh6\" (UniqueName: \"kubernetes.io/projected/a9b994a1-9306-48ee-a202-62a8506f2f15-kube-api-access-4bfh6\") pod \"a9b994a1-9306-48ee-a202-62a8506f2f15\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.057490 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ssh-key-openstack-edpm-ipam\") pod \"a9b994a1-9306-48ee-a202-62a8506f2f15\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.057750 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-0\") pod \"a9b994a1-9306-48ee-a202-62a8506f2f15\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.057883 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-2\") pod \"a9b994a1-9306-48ee-a202-62a8506f2f15\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.058045 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-1\") pod \"a9b994a1-9306-48ee-a202-62a8506f2f15\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.058172 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-inventory\") pod \"a9b994a1-9306-48ee-a202-62a8506f2f15\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.058417 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-telemetry-combined-ca-bundle\") pod \"a9b994a1-9306-48ee-a202-62a8506f2f15\" (UID: \"a9b994a1-9306-48ee-a202-62a8506f2f15\") " Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.066403 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a9b994a1-9306-48ee-a202-62a8506f2f15" (UID: "a9b994a1-9306-48ee-a202-62a8506f2f15"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.066424 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b994a1-9306-48ee-a202-62a8506f2f15-kube-api-access-4bfh6" (OuterVolumeSpecName: "kube-api-access-4bfh6") pod "a9b994a1-9306-48ee-a202-62a8506f2f15" (UID: "a9b994a1-9306-48ee-a202-62a8506f2f15"). InnerVolumeSpecName "kube-api-access-4bfh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.092883 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "a9b994a1-9306-48ee-a202-62a8506f2f15" (UID: "a9b994a1-9306-48ee-a202-62a8506f2f15"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.110119 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "a9b994a1-9306-48ee-a202-62a8506f2f15" (UID: "a9b994a1-9306-48ee-a202-62a8506f2f15"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.110521 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "a9b994a1-9306-48ee-a202-62a8506f2f15" (UID: "a9b994a1-9306-48ee-a202-62a8506f2f15"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.112205 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a9b994a1-9306-48ee-a202-62a8506f2f15" (UID: "a9b994a1-9306-48ee-a202-62a8506f2f15"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.121941 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-inventory" (OuterVolumeSpecName: "inventory") pod "a9b994a1-9306-48ee-a202-62a8506f2f15" (UID: "a9b994a1-9306-48ee-a202-62a8506f2f15"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.161415 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bfh6\" (UniqueName: \"kubernetes.io/projected/a9b994a1-9306-48ee-a202-62a8506f2f15-kube-api-access-4bfh6\") on node \"crc\" DevicePath \"\"" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.161480 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.161492 4803 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.161504 4803 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.161514 4803 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.161524 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.161533 4803 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b994a1-9306-48ee-a202-62a8506f2f15-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.533942 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" event={"ID":"a9b994a1-9306-48ee-a202-62a8506f2f15","Type":"ContainerDied","Data":"cb249a22ce50c59d9f093024ae4023b0cddf3494670e9b8f9d3bae72b4276b91"} Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.534297 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb249a22ce50c59d9f093024ae4023b0cddf3494670e9b8f9d3bae72b4276b91" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.534015 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-plgl7" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.674253 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc"] Jan 27 22:37:12 crc kubenswrapper[4803]: E0127 22:37:12.675984 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerName="registry-server" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.676073 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerName="registry-server" Jan 27 22:37:12 crc kubenswrapper[4803]: E0127 22:37:12.676145 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b994a1-9306-48ee-a202-62a8506f2f15" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.676157 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b994a1-9306-48ee-a202-62a8506f2f15" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 22:37:12 crc kubenswrapper[4803]: E0127 22:37:12.676180 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerName="extract-utilities" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.676187 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerName="extract-utilities" Jan 27 22:37:12 crc kubenswrapper[4803]: E0127 22:37:12.676246 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerName="extract-content" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.676254 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerName="extract-content" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.676873 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b994a1-9306-48ee-a202-62a8506f2f15" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.676923 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bf0d30f-86d8-4106-b1c6-34cecb11471e" containerName="registry-server" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.678644 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.697519 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc"] Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.699046 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.699572 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.700955 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.701246 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.701527 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.779280 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.779321 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.779488 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.779520 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2nv\" (UniqueName: \"kubernetes.io/projected/7b71eaf1-b828-42a9-8fae-452c3d2f628e-kube-api-access-bt2nv\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.779563 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-2\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.779585 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.779618 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.881317 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.881384 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt2nv\" (UniqueName: \"kubernetes.io/projected/7b71eaf1-b828-42a9-8fae-452c3d2f628e-kube-api-access-bt2nv\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.881440 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.881466 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.881516 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: 
\"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.881637 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.881666 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.886620 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.886702 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.886793 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.887122 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.887686 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.888321 4803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:12 crc kubenswrapper[4803]: I0127 22:37:12.900433 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt2nv\" (UniqueName: \"kubernetes.io/projected/7b71eaf1-b828-42a9-8fae-452c3d2f628e-kube-api-access-bt2nv\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:13 crc kubenswrapper[4803]: I0127 22:37:13.029502 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:37:13 crc kubenswrapper[4803]: I0127 22:37:13.632288 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc"] Jan 27 22:37:14 crc kubenswrapper[4803]: I0127 22:37:14.559875 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" event={"ID":"7b71eaf1-b828-42a9-8fae-452c3d2f628e","Type":"ContainerStarted","Data":"8bbb08239bcfa62100567df72f4a5d564ca0ef1ad66525f4e86e62a3c4bdf50a"} Jan 27 22:37:14 crc kubenswrapper[4803]: I0127 22:37:14.560283 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" event={"ID":"7b71eaf1-b828-42a9-8fae-452c3d2f628e","Type":"ContainerStarted","Data":"fc7d1b736d4ece5aefa823cc6dd6ab41e3831487e5ce44663e01d0f5aafa5e8c"} Jan 27 22:37:14 crc kubenswrapper[4803]: I0127 22:37:14.582328 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" podStartSLOduration=2.121602855 podStartE2EDuration="2.582306113s" podCreationTimestamp="2026-01-27 22:37:12 +0000 UTC" firstStartedPulling="2026-01-27 22:37:13.652950159 +0000 UTC m=+2986.068971858" lastFinishedPulling="2026-01-27 22:37:14.113653417 +0000 UTC m=+2986.529675116" observedRunningTime="2026-01-27 22:37:14.581468821 +0000 UTC m=+2986.997490540" watchObservedRunningTime="2026-01-27 22:37:14.582306113 +0000 UTC m=+2986.998327832" Jan 27 22:37:24 crc kubenswrapper[4803]: I0127 22:37:24.307242 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:37:24 crc kubenswrapper[4803]: E0127 22:37:24.307912 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:37:36 crc kubenswrapper[4803]: I0127 22:37:36.307125 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:37:36 crc kubenswrapper[4803]: E0127 22:37:36.307979 4803 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:37:51 crc kubenswrapper[4803]: I0127 22:37:51.307420 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:37:51 crc kubenswrapper[4803]: E0127 22:37:51.308436 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:38:02 crc kubenswrapper[4803]: I0127 22:38:02.307664 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:38:02 crc kubenswrapper[4803]: E0127 22:38:02.308616 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:38:16 crc kubenswrapper[4803]: I0127 22:38:16.307994 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:38:16 crc kubenswrapper[4803]: E0127 22:38:16.308924 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:38:29 crc kubenswrapper[4803]: I0127 22:38:29.306814 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:38:29 crc kubenswrapper[4803]: E0127 22:38:29.307784 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:38:43 crc kubenswrapper[4803]: I0127 22:38:43.306873 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:38:43 crc kubenswrapper[4803]: E0127 22:38:43.307822 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:38:56 crc kubenswrapper[4803]: I0127 22:38:56.307563 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:38:56 crc kubenswrapper[4803]: E0127 22:38:56.308268 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:39:10 crc kubenswrapper[4803]: I0127 22:39:10.306644 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:39:10 crc kubenswrapper[4803]: E0127 22:39:10.308817 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:39:11 crc kubenswrapper[4803]: I0127 22:39:11.758968 4803 generic.go:334] "Generic (PLEG): container finished" podID="7b71eaf1-b828-42a9-8fae-452c3d2f628e" containerID="8bbb08239bcfa62100567df72f4a5d564ca0ef1ad66525f4e86e62a3c4bdf50a" exitCode=0 Jan 27 22:39:11 crc kubenswrapper[4803]: I0127 22:39:11.759034 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" event={"ID":"7b71eaf1-b828-42a9-8fae-452c3d2f628e","Type":"ContainerDied","Data":"8bbb08239bcfa62100567df72f4a5d564ca0ef1ad66525f4e86e62a3c4bdf50a"} Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.248368 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.288682 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-0\") pod \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.288802 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-2\") pod \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.288933 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-telemetry-power-monitoring-combined-ca-bundle\") pod \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.289050 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-1\") pod \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.289086 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ssh-key-openstack-edpm-ipam\") pod \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.289145 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt2nv\" (UniqueName: \"kubernetes.io/projected/7b71eaf1-b828-42a9-8fae-452c3d2f628e-kube-api-access-bt2nv\") pod \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.289209 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-inventory\") pod \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\" (UID: \"7b71eaf1-b828-42a9-8fae-452c3d2f628e\") " Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.296985 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "7b71eaf1-b828-42a9-8fae-452c3d2f628e" (UID: "7b71eaf1-b828-42a9-8fae-452c3d2f628e"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.297318 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b71eaf1-b828-42a9-8fae-452c3d2f628e-kube-api-access-bt2nv" (OuterVolumeSpecName: "kube-api-access-bt2nv") pod "7b71eaf1-b828-42a9-8fae-452c3d2f628e" (UID: "7b71eaf1-b828-42a9-8fae-452c3d2f628e"). InnerVolumeSpecName "kube-api-access-bt2nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.323327 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "7b71eaf1-b828-42a9-8fae-452c3d2f628e" (UID: "7b71eaf1-b828-42a9-8fae-452c3d2f628e"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.323574 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7b71eaf1-b828-42a9-8fae-452c3d2f628e" (UID: "7b71eaf1-b828-42a9-8fae-452c3d2f628e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.336677 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-inventory" (OuterVolumeSpecName: "inventory") pod "7b71eaf1-b828-42a9-8fae-452c3d2f628e" (UID: "7b71eaf1-b828-42a9-8fae-452c3d2f628e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.337314 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "7b71eaf1-b828-42a9-8fae-452c3d2f628e" (UID: "7b71eaf1-b828-42a9-8fae-452c3d2f628e"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.339582 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "7b71eaf1-b828-42a9-8fae-452c3d2f628e" (UID: "7b71eaf1-b828-42a9-8fae-452c3d2f628e"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.393050 4803 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.393096 4803 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.393108 4803 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.393122 4803 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.393132 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.393142 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt2nv\" (UniqueName: \"kubernetes.io/projected/7b71eaf1-b828-42a9-8fae-452c3d2f628e-kube-api-access-bt2nv\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.393150 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b71eaf1-b828-42a9-8fae-452c3d2f628e-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.778960 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" event={"ID":"7b71eaf1-b828-42a9-8fae-452c3d2f628e","Type":"ContainerDied","Data":"fc7d1b736d4ece5aefa823cc6dd6ab41e3831487e5ce44663e01d0f5aafa5e8c"} Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.779001 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.779006 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc7d1b736d4ece5aefa823cc6dd6ab41e3831487e5ce44663e01d0f5aafa5e8c" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.881050 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq"] Jan 27 22:39:13 crc kubenswrapper[4803]: E0127 22:39:13.881499 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b71eaf1-b828-42a9-8fae-452c3d2f628e" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.881517 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b71eaf1-b828-42a9-8fae-452c3d2f628e" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.881750 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b71eaf1-b828-42a9-8fae-452c3d2f628e" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.882564 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.888914 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.889302 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2fl9z" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.889583 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.891033 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.891819 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 22:39:13 crc kubenswrapper[4803]: I0127 22:39:13.896257 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq"] Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.008884 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.009209 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.009466 4803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.009576 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.009788 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4vcf\" (UniqueName: \"kubernetes.io/projected/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-kube-api-access-c4vcf\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.112301 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.112424 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.112477 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.112720 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4vcf\" (UniqueName: \"kubernetes.io/projected/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-kube-api-access-c4vcf\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.112753 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 
crc kubenswrapper[4803]: I0127 22:39:14.117278 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.117582 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.117874 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.118347 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.137516 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4vcf\" (UniqueName: \"kubernetes.io/projected/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-kube-api-access-c4vcf\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tqmbq\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.202250 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.765301 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq"] Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.770459 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 22:39:14 crc kubenswrapper[4803]: I0127 22:39:14.791439 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" event={"ID":"035bdbf8-512b-42d2-ab7f-fd357ea4fa98","Type":"ContainerStarted","Data":"629f7752ebb37157ee8d4f361a1a66a937bcf20ec2492111ab2a6ce98feb9845"} Jan 27 22:39:15 crc kubenswrapper[4803]: I0127 22:39:15.801834 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" event={"ID":"035bdbf8-512b-42d2-ab7f-fd357ea4fa98","Type":"ContainerStarted","Data":"4991a9cbeca9c87995ef9b94fbe8221230a7bfca9af6615046022d8cadef3873"} Jan 27 22:39:15 crc kubenswrapper[4803]: I0127 22:39:15.825530 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" podStartSLOduration=2.366809005 podStartE2EDuration="2.82550988s" podCreationTimestamp="2026-01-27 22:39:13 +0000 UTC" firstStartedPulling="2026-01-27 22:39:14.770056112 +0000 UTC m=+3107.186077811" lastFinishedPulling="2026-01-27 22:39:15.228756987 +0000 UTC m=+3107.644778686" observedRunningTime="2026-01-27 22:39:15.81560844 +0000 UTC m=+3108.231630139" watchObservedRunningTime="2026-01-27 22:39:15.82550988 +0000 UTC m=+3108.241531589" Jan 27 22:39:23 crc kubenswrapper[4803]: I0127 22:39:23.307367 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:39:23 crc kubenswrapper[4803]: E0127 22:39:23.309254 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:39:30 crc kubenswrapper[4803]: I0127 22:39:30.951723 4803 generic.go:334] "Generic (PLEG): container finished" podID="035bdbf8-512b-42d2-ab7f-fd357ea4fa98" containerID="4991a9cbeca9c87995ef9b94fbe8221230a7bfca9af6615046022d8cadef3873" exitCode=0 Jan 27 22:39:30 crc kubenswrapper[4803]: I0127 22:39:30.951803 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" event={"ID":"035bdbf8-512b-42d2-ab7f-fd357ea4fa98","Type":"ContainerDied","Data":"4991a9cbeca9c87995ef9b94fbe8221230a7bfca9af6615046022d8cadef3873"} Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.404255 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.464141 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-inventory\") pod \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.464467 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-ssh-key-openstack-edpm-ipam\") pod \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.464508 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-1\") pod \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.464700 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4vcf\" (UniqueName: \"kubernetes.io/projected/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-kube-api-access-c4vcf\") pod \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.464760 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-0\") pod \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\" (UID: \"035bdbf8-512b-42d2-ab7f-fd357ea4fa98\") " Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.470393 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-kube-api-access-c4vcf" (OuterVolumeSpecName: "kube-api-access-c4vcf") pod "035bdbf8-512b-42d2-ab7f-fd357ea4fa98" (UID: "035bdbf8-512b-42d2-ab7f-fd357ea4fa98"). InnerVolumeSpecName "kube-api-access-c4vcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.496475 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-inventory" (OuterVolumeSpecName: "inventory") pod "035bdbf8-512b-42d2-ab7f-fd357ea4fa98" (UID: "035bdbf8-512b-42d2-ab7f-fd357ea4fa98"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.497337 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "035bdbf8-512b-42d2-ab7f-fd357ea4fa98" (UID: "035bdbf8-512b-42d2-ab7f-fd357ea4fa98"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.497948 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "035bdbf8-512b-42d2-ab7f-fd357ea4fa98" (UID: "035bdbf8-512b-42d2-ab7f-fd357ea4fa98"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.500763 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "035bdbf8-512b-42d2-ab7f-fd357ea4fa98" (UID: "035bdbf8-512b-42d2-ab7f-fd357ea4fa98"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.568328 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4vcf\" (UniqueName: \"kubernetes.io/projected/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-kube-api-access-c4vcf\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.568385 4803 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.568406 4803 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.568426 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.568446 4803 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/035bdbf8-512b-42d2-ab7f-fd357ea4fa98-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.972182 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" event={"ID":"035bdbf8-512b-42d2-ab7f-fd357ea4fa98","Type":"ContainerDied","Data":"629f7752ebb37157ee8d4f361a1a66a937bcf20ec2492111ab2a6ce98feb9845"} Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.972220 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="629f7752ebb37157ee8d4f361a1a66a937bcf20ec2492111ab2a6ce98feb9845" Jan 27 22:39:32 crc kubenswrapper[4803]: I0127 22:39:32.972273 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tqmbq" Jan 27 22:39:35 crc kubenswrapper[4803]: I0127 22:39:35.308777 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:39:35 crc kubenswrapper[4803]: E0127 22:39:35.310316 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.646471 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k586t"] Jan 27 22:39:46 crc kubenswrapper[4803]: E0127 22:39:46.647931 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035bdbf8-512b-42d2-ab7f-fd357ea4fa98" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.647956 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="035bdbf8-512b-42d2-ab7f-fd357ea4fa98" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.648318 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="035bdbf8-512b-42d2-ab7f-fd357ea4fa98" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.651142 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.662069 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k586t"] Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.803603 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-utilities\") pod \"redhat-marketplace-k586t\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.803726 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-catalog-content\") pod \"redhat-marketplace-k586t\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.804028 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng29q\" (UniqueName: \"kubernetes.io/projected/8c810905-c1c5-43c4-a774-de12c4d1ed59-kube-api-access-ng29q\") pod \"redhat-marketplace-k586t\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.906441 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-utilities\") pod \"redhat-marketplace-k586t\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " 
pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.906547 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-catalog-content\") pod \"redhat-marketplace-k586t\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.906739 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng29q\" (UniqueName: \"kubernetes.io/projected/8c810905-c1c5-43c4-a774-de12c4d1ed59-kube-api-access-ng29q\") pod \"redhat-marketplace-k586t\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.907048 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-utilities\") pod \"redhat-marketplace-k586t\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.907105 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-catalog-content\") pod \"redhat-marketplace-k586t\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.929889 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng29q\" (UniqueName: \"kubernetes.io/projected/8c810905-c1c5-43c4-a774-de12c4d1ed59-kube-api-access-ng29q\") pod \"redhat-marketplace-k586t\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:46 crc kubenswrapper[4803]: I0127 22:39:46.981434 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:47 crc kubenswrapper[4803]: I0127 22:39:47.543085 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k586t"] Jan 27 22:39:48 crc kubenswrapper[4803]: I0127 22:39:48.118444 4803 generic.go:334] "Generic (PLEG): container finished" podID="8c810905-c1c5-43c4-a774-de12c4d1ed59" containerID="3003e50737d3edc4bad6fee53e7306c63d3b27e4773fb3fd12a724e2ba69c080" exitCode=0 Jan 27 22:39:48 crc kubenswrapper[4803]: I0127 22:39:48.118503 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k586t" event={"ID":"8c810905-c1c5-43c4-a774-de12c4d1ed59","Type":"ContainerDied","Data":"3003e50737d3edc4bad6fee53e7306c63d3b27e4773fb3fd12a724e2ba69c080"} Jan 27 22:39:48 crc kubenswrapper[4803]: I0127 22:39:48.118895 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k586t" event={"ID":"8c810905-c1c5-43c4-a774-de12c4d1ed59","Type":"ContainerStarted","Data":"41ca80db9dce083bb36160fc5fd6657c25174c993bf057f65d2ac17839de6c75"} Jan 27 22:39:49 crc kubenswrapper[4803]: I0127 22:39:49.130652 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k586t" event={"ID":"8c810905-c1c5-43c4-a774-de12c4d1ed59","Type":"ContainerStarted","Data":"f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e"} Jan 27 22:39:49 crc kubenswrapper[4803]: I0127 22:39:49.306877 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:39:49 crc kubenswrapper[4803]: E0127 22:39:49.316143 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.148668 4803 generic.go:334] "Generic (PLEG): container finished" podID="8c810905-c1c5-43c4-a774-de12c4d1ed59" containerID="f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e" exitCode=0 Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.148752 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k586t" event={"ID":"8c810905-c1c5-43c4-a774-de12c4d1ed59","Type":"ContainerDied","Data":"f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e"} Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.625791 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-64bl5"] Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.629043 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.648879 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-64bl5"] Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.805937 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n48b\" (UniqueName: \"kubernetes.io/projected/e3028236-a937-4b01-a16d-3df28e5ebc3d-kube-api-access-9n48b\") pod \"community-operators-64bl5\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.806485 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-utilities\") pod \"community-operators-64bl5\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.806568 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-catalog-content\") pod \"community-operators-64bl5\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.908578 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-utilities\") pod \"community-operators-64bl5\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.908643 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-catalog-content\") pod \"community-operators-64bl5\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.908751 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n48b\" (UniqueName: \"kubernetes.io/projected/e3028236-a937-4b01-a16d-3df28e5ebc3d-kube-api-access-9n48b\") pod \"community-operators-64bl5\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.909197 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-utilities\") pod \"community-operators-64bl5\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.909210 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-catalog-content\") pod \"community-operators-64bl5\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:39:50 crc kubenswrapper[4803]: I0127 22:39:50.931542 4803 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9n48b\" (UniqueName: \"kubernetes.io/projected/e3028236-a937-4b01-a16d-3df28e5ebc3d-kube-api-access-9n48b\") pod \"community-operators-64bl5\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:39:51 crc kubenswrapper[4803]: I0127 22:39:51.041699 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:39:51 crc kubenswrapper[4803]: I0127 22:39:51.179652 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k586t" event={"ID":"8c810905-c1c5-43c4-a774-de12c4d1ed59","Type":"ContainerStarted","Data":"8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f"} Jan 27 22:39:51 crc kubenswrapper[4803]: I0127 22:39:51.213103 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k586t" podStartSLOduration=2.760195923 podStartE2EDuration="5.213077908s" podCreationTimestamp="2026-01-27 22:39:46 +0000 UTC" firstStartedPulling="2026-01-27 22:39:48.12203975 +0000 UTC m=+3140.538061459" lastFinishedPulling="2026-01-27 22:39:50.574921745 +0000 UTC m=+3142.990943444" observedRunningTime="2026-01-27 22:39:51.208832213 +0000 UTC m=+3143.624853942" watchObservedRunningTime="2026-01-27 22:39:51.213077908 +0000 UTC m=+3143.629099607" Jan 27 22:39:51 crc kubenswrapper[4803]: I0127 22:39:51.594171 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-64bl5"] Jan 27 22:39:52 crc kubenswrapper[4803]: I0127 22:39:52.194636 4803 generic.go:334] "Generic (PLEG): container finished" podID="e3028236-a937-4b01-a16d-3df28e5ebc3d" containerID="ebd0a91c6ff8db4d21a29e51f353616a5bcf56ab89a6dcc87bb7bac450db9274" exitCode=0 Jan 27 22:39:52 crc kubenswrapper[4803]: I0127 22:39:52.197006 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64bl5" event={"ID":"e3028236-a937-4b01-a16d-3df28e5ebc3d","Type":"ContainerDied","Data":"ebd0a91c6ff8db4d21a29e51f353616a5bcf56ab89a6dcc87bb7bac450db9274"} Jan 27 22:39:52 crc kubenswrapper[4803]: I0127 22:39:52.197092 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64bl5" event={"ID":"e3028236-a937-4b01-a16d-3df28e5ebc3d","Type":"ContainerStarted","Data":"02454785a29835958a3f57fb5bbd439d70028757735c482c4ed81b28a074635e"} Jan 27 22:39:53 crc kubenswrapper[4803]: I0127 22:39:53.206656 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64bl5" event={"ID":"e3028236-a937-4b01-a16d-3df28e5ebc3d","Type":"ContainerStarted","Data":"f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b"} Jan 27 22:39:55 crc kubenswrapper[4803]: I0127 22:39:55.228880 4803 generic.go:334] "Generic (PLEG): container finished" podID="e3028236-a937-4b01-a16d-3df28e5ebc3d" containerID="f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b" exitCode=0 Jan 27 22:39:55 crc kubenswrapper[4803]: I0127 22:39:55.229431 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64bl5" event={"ID":"e3028236-a937-4b01-a16d-3df28e5ebc3d","Type":"ContainerDied","Data":"f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b"} Jan 27 22:39:56 crc kubenswrapper[4803]: I0127 22:39:56.243686 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-64bl5" event={"ID":"e3028236-a937-4b01-a16d-3df28e5ebc3d","Type":"ContainerStarted","Data":"3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c"} Jan 27 22:39:56 crc kubenswrapper[4803]: I0127 22:39:56.273581 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-64bl5" podStartSLOduration=2.77333719 podStartE2EDuration="6.27350931s" podCreationTimestamp="2026-01-27 22:39:50 +0000 UTC" firstStartedPulling="2026-01-27 22:39:52.198622748 +0000 UTC m=+3144.614644447" lastFinishedPulling="2026-01-27 22:39:55.698794868 +0000 UTC m=+3148.114816567" observedRunningTime="2026-01-27 22:39:56.25923733 +0000 UTC m=+3148.675259049" watchObservedRunningTime="2026-01-27 22:39:56.27350931 +0000 UTC m=+3148.689531009" Jan 27 22:39:56 crc kubenswrapper[4803]: I0127 22:39:56.981609 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:56 crc kubenswrapper[4803]: I0127 22:39:56.981657 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:57 crc kubenswrapper[4803]: I0127 22:39:57.039328 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:57 crc kubenswrapper[4803]: I0127 22:39:57.303819 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:39:58 crc kubenswrapper[4803]: I0127 22:39:58.418551 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k586t"] Jan 27 22:39:59 crc kubenswrapper[4803]: I0127 22:39:59.284472 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k586t" podUID="8c810905-c1c5-43c4-a774-de12c4d1ed59" containerName="registry-server" containerID="cri-o://8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f" gracePeriod=2 Jan 27 22:39:59 crc kubenswrapper[4803]: I0127 22:39:59.880212 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.021872 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng29q\" (UniqueName: \"kubernetes.io/projected/8c810905-c1c5-43c4-a774-de12c4d1ed59-kube-api-access-ng29q\") pod \"8c810905-c1c5-43c4-a774-de12c4d1ed59\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.022241 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-utilities\") pod \"8c810905-c1c5-43c4-a774-de12c4d1ed59\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.022350 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-catalog-content\") pod \"8c810905-c1c5-43c4-a774-de12c4d1ed59\" (UID: \"8c810905-c1c5-43c4-a774-de12c4d1ed59\") " Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.023290 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-utilities" (OuterVolumeSpecName: "utilities") pod "8c810905-c1c5-43c4-a774-de12c4d1ed59" (UID: "8c810905-c1c5-43c4-a774-de12c4d1ed59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.027015 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c810905-c1c5-43c4-a774-de12c4d1ed59-kube-api-access-ng29q" (OuterVolumeSpecName: "kube-api-access-ng29q") pod "8c810905-c1c5-43c4-a774-de12c4d1ed59" (UID: "8c810905-c1c5-43c4-a774-de12c4d1ed59"). InnerVolumeSpecName "kube-api-access-ng29q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.043740 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c810905-c1c5-43c4-a774-de12c4d1ed59" (UID: "8c810905-c1c5-43c4-a774-de12c4d1ed59"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.125068 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng29q\" (UniqueName: \"kubernetes.io/projected/8c810905-c1c5-43c4-a774-de12c4d1ed59-kube-api-access-ng29q\") on node \"crc\" DevicePath \"\"" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.125098 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.125111 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c810905-c1c5-43c4-a774-de12c4d1ed59-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.294968 4803 generic.go:334] "Generic (PLEG): container finished" podID="8c810905-c1c5-43c4-a774-de12c4d1ed59" containerID="8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f" exitCode=0 Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.295009 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k586t" event={"ID":"8c810905-c1c5-43c4-a774-de12c4d1ed59","Type":"ContainerDied","Data":"8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f"} Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.295034 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k586t" event={"ID":"8c810905-c1c5-43c4-a774-de12c4d1ed59","Type":"ContainerDied","Data":"41ca80db9dce083bb36160fc5fd6657c25174c993bf057f65d2ac17839de6c75"} Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.295051 4803 scope.go:117] "RemoveContainer" containerID="8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.295045 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k586t" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.315629 4803 scope.go:117] "RemoveContainer" containerID="f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.367828 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k586t"] Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.368564 4803 scope.go:117] "RemoveContainer" containerID="3003e50737d3edc4bad6fee53e7306c63d3b27e4773fb3fd12a724e2ba69c080" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.378603 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k586t"] Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.413075 4803 scope.go:117] "RemoveContainer" containerID="8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f" Jan 27 22:40:00 crc kubenswrapper[4803]: E0127 22:40:00.417038 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f\": container with ID starting with 8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f not found: ID does not exist" containerID="8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.417093 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f"} err="failed to get container status \"8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f\": rpc error: code = NotFound desc = could not find container \"8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f\": container with ID starting with 8078b68ccb4e138b50429a9b99ea39b20a77c937c20d248a5639bb0dbc5db54f not found: ID does not exist" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.417123 4803 scope.go:117] "RemoveContainer" containerID="f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e" Jan 27 22:40:00 crc kubenswrapper[4803]: E0127 22:40:00.421002 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e\": container with ID starting with f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e not found: ID does not exist" containerID="f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.421041 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e"} err="failed to get container status \"f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e\": rpc error: code = NotFound desc = could not find container \"f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e\": container with ID starting with f7551b199607f924a5bdd1d25da37efe9b0abbe7e3f4277cf56aab6ed9d2e16e not found: ID does not exist" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.421061 4803 scope.go:117] "RemoveContainer" containerID="3003e50737d3edc4bad6fee53e7306c63d3b27e4773fb3fd12a724e2ba69c080" Jan 27 22:40:00 crc kubenswrapper[4803]: E0127 22:40:00.421829 4803 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3003e50737d3edc4bad6fee53e7306c63d3b27e4773fb3fd12a724e2ba69c080\": container with ID starting with 3003e50737d3edc4bad6fee53e7306c63d3b27e4773fb3fd12a724e2ba69c080 not found: ID does not exist" containerID="3003e50737d3edc4bad6fee53e7306c63d3b27e4773fb3fd12a724e2ba69c080" Jan 27 22:40:00 crc kubenswrapper[4803]: I0127 22:40:00.421869 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3003e50737d3edc4bad6fee53e7306c63d3b27e4773fb3fd12a724e2ba69c080"} err="failed to get container status \"3003e50737d3edc4bad6fee53e7306c63d3b27e4773fb3fd12a724e2ba69c080\": rpc error: code = NotFound desc = could not find container \"3003e50737d3edc4bad6fee53e7306c63d3b27e4773fb3fd12a724e2ba69c080\": container with ID starting with 3003e50737d3edc4bad6fee53e7306c63d3b27e4773fb3fd12a724e2ba69c080 not found: ID does not exist" Jan 27 22:40:01 crc kubenswrapper[4803]: I0127 22:40:01.042636 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:40:01 crc kubenswrapper[4803]: I0127 22:40:01.043016 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:40:01 crc kubenswrapper[4803]: I0127 22:40:01.090151 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:40:01 crc kubenswrapper[4803]: I0127 22:40:01.348395 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:40:02 crc kubenswrapper[4803]: I0127 22:40:02.323497 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c810905-c1c5-43c4-a774-de12c4d1ed59" path="/var/lib/kubelet/pods/8c810905-c1c5-43c4-a774-de12c4d1ed59/volumes" Jan 27 22:40:02 crc kubenswrapper[4803]: I0127 22:40:02.822490 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-64bl5"] Jan 27 22:40:03 crc kubenswrapper[4803]: I0127 22:40:03.307175 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:40:03 crc kubenswrapper[4803]: E0127 22:40:03.308004 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:40:03 crc kubenswrapper[4803]: I0127 22:40:03.333532 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-64bl5" podUID="e3028236-a937-4b01-a16d-3df28e5ebc3d" containerName="registry-server" containerID="cri-o://3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c" gracePeriod=2 Jan 27 22:40:03 crc kubenswrapper[4803]: I0127 22:40:03.868207 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:40:03 crc kubenswrapper[4803]: I0127 22:40:03.974220 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n48b\" (UniqueName: \"kubernetes.io/projected/e3028236-a937-4b01-a16d-3df28e5ebc3d-kube-api-access-9n48b\") pod \"e3028236-a937-4b01-a16d-3df28e5ebc3d\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " Jan 27 22:40:03 crc kubenswrapper[4803]: I0127 22:40:03.974461 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-catalog-content\") pod \"e3028236-a937-4b01-a16d-3df28e5ebc3d\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " Jan 27 22:40:03 crc kubenswrapper[4803]: I0127 22:40:03.975205 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-utilities\") pod \"e3028236-a937-4b01-a16d-3df28e5ebc3d\" (UID: \"e3028236-a937-4b01-a16d-3df28e5ebc3d\") " Jan 27 22:40:03 crc kubenswrapper[4803]: I0127 22:40:03.976027 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-utilities" (OuterVolumeSpecName: "utilities") pod "e3028236-a937-4b01-a16d-3df28e5ebc3d" (UID: "e3028236-a937-4b01-a16d-3df28e5ebc3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:40:03 crc kubenswrapper[4803]: I0127 22:40:03.976472 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:40:03 crc kubenswrapper[4803]: I0127 22:40:03.980027 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3028236-a937-4b01-a16d-3df28e5ebc3d-kube-api-access-9n48b" (OuterVolumeSpecName: "kube-api-access-9n48b") pod "e3028236-a937-4b01-a16d-3df28e5ebc3d" (UID: "e3028236-a937-4b01-a16d-3df28e5ebc3d"). InnerVolumeSpecName "kube-api-access-9n48b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.032756 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3028236-a937-4b01-a16d-3df28e5ebc3d" (UID: "e3028236-a937-4b01-a16d-3df28e5ebc3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.078998 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n48b\" (UniqueName: \"kubernetes.io/projected/e3028236-a937-4b01-a16d-3df28e5ebc3d-kube-api-access-9n48b\") on node \"crc\" DevicePath \"\"" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.079038 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3028236-a937-4b01-a16d-3df28e5ebc3d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.345204 4803 generic.go:334] "Generic (PLEG): container finished" podID="e3028236-a937-4b01-a16d-3df28e5ebc3d" containerID="3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c" exitCode=0 Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.345460 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64bl5" event={"ID":"e3028236-a937-4b01-a16d-3df28e5ebc3d","Type":"ContainerDied","Data":"3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c"} Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.345488 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64bl5" event={"ID":"e3028236-a937-4b01-a16d-3df28e5ebc3d","Type":"ContainerDied","Data":"02454785a29835958a3f57fb5bbd439d70028757735c482c4ed81b28a074635e"} Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.345505 4803 scope.go:117] "RemoveContainer" containerID="3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.345638 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-64bl5" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.382116 4803 scope.go:117] "RemoveContainer" containerID="f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.384168 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-64bl5"] Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.394068 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-64bl5"] Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.413910 4803 scope.go:117] "RemoveContainer" containerID="ebd0a91c6ff8db4d21a29e51f353616a5bcf56ab89a6dcc87bb7bac450db9274" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.476620 4803 scope.go:117] "RemoveContainer" containerID="3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c" Jan 27 22:40:04 crc kubenswrapper[4803]: E0127 22:40:04.477253 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c\": container with ID starting with 3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c not found: ID does not exist" containerID="3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.477316 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c"} err="failed to get container status \"3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c\": rpc error: code = NotFound desc = could not find container \"3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c\": container with ID starting with 3c2f932b2d618c1bf0b11bbe763c316806a62009c057a186ac0629eed76b269c not found: ID does not exist" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.477344 4803 scope.go:117] "RemoveContainer" containerID="f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b" Jan 27 22:40:04 crc kubenswrapper[4803]: E0127 22:40:04.477874 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b\": container with ID starting with f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b not found: ID does not exist" containerID="f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.477916 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b"} err="failed to get container status \"f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b\": rpc error: code = NotFound desc = could not find container \"f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b\": container with ID starting with f0196834db10c171ad87066eb043a73c479e197af7f9b55d9293701096e9f94b not found: ID does not exist" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.477945 4803 scope.go:117] "RemoveContainer" containerID="ebd0a91c6ff8db4d21a29e51f353616a5bcf56ab89a6dcc87bb7bac450db9274" Jan 27 22:40:04 crc kubenswrapper[4803]: E0127 22:40:04.478353 4803 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ebd0a91c6ff8db4d21a29e51f353616a5bcf56ab89a6dcc87bb7bac450db9274\": container with ID starting with ebd0a91c6ff8db4d21a29e51f353616a5bcf56ab89a6dcc87bb7bac450db9274 not found: ID does not exist" containerID="ebd0a91c6ff8db4d21a29e51f353616a5bcf56ab89a6dcc87bb7bac450db9274" Jan 27 22:40:04 crc kubenswrapper[4803]: I0127 22:40:04.478383 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd0a91c6ff8db4d21a29e51f353616a5bcf56ab89a6dcc87bb7bac450db9274"} err="failed to get container status \"ebd0a91c6ff8db4d21a29e51f353616a5bcf56ab89a6dcc87bb7bac450db9274\": rpc error: code = NotFound desc = could not find container \"ebd0a91c6ff8db4d21a29e51f353616a5bcf56ab89a6dcc87bb7bac450db9274\": container with ID starting with ebd0a91c6ff8db4d21a29e51f353616a5bcf56ab89a6dcc87bb7bac450db9274 not found: ID does not exist" Jan 27 22:40:06 crc kubenswrapper[4803]: I0127 22:40:06.321088 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3028236-a937-4b01-a16d-3df28e5ebc3d" path="/var/lib/kubelet/pods/e3028236-a937-4b01-a16d-3df28e5ebc3d/volumes" Jan 27 22:40:15 crc kubenswrapper[4803]: I0127 22:40:15.308568 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:40:15 crc kubenswrapper[4803]: E0127 22:40:15.309354 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:40:29 crc kubenswrapper[4803]: I0127 22:40:29.307923 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:40:29 crc kubenswrapper[4803]: E0127 22:40:29.308964 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:40:42 crc kubenswrapper[4803]: I0127 22:40:42.307015 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:40:42 crc kubenswrapper[4803]: E0127 22:40:42.307959 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:40:57 crc kubenswrapper[4803]: I0127 22:40:57.306644 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:40:57 crc kubenswrapper[4803]: I0127 22:40:57.948861 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"66ebb0459d51e52a323f553759add2a10dd54207ac59075aca12aa4ffd2e9a83"} Jan 27 22:43:16 crc kubenswrapper[4803]: I0127 22:43:16.343263 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:43:16 crc kubenswrapper[4803]: I0127 22:43:16.343969 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:43:46 crc kubenswrapper[4803]: I0127 22:43:46.343560 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:43:46 crc kubenswrapper[4803]: I0127 22:43:46.344460 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:44:16 crc kubenswrapper[4803]: I0127 22:44:16.343333 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:44:16 crc kubenswrapper[4803]: I0127 22:44:16.343980 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:44:16 crc kubenswrapper[4803]: I0127 22:44:16.344038 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 22:44:16 crc kubenswrapper[4803]: I0127 22:44:16.345174 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66ebb0459d51e52a323f553759add2a10dd54207ac59075aca12aa4ffd2e9a83"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 22:44:16 crc kubenswrapper[4803]: I0127 22:44:16.345239 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://66ebb0459d51e52a323f553759add2a10dd54207ac59075aca12aa4ffd2e9a83" gracePeriod=600 Jan 27 22:44:16 crc kubenswrapper[4803]: I0127 
22:44:16.943083 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="66ebb0459d51e52a323f553759add2a10dd54207ac59075aca12aa4ffd2e9a83" exitCode=0 Jan 27 22:44:16 crc kubenswrapper[4803]: I0127 22:44:16.943241 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"66ebb0459d51e52a323f553759add2a10dd54207ac59075aca12aa4ffd2e9a83"} Jan 27 22:44:16 crc kubenswrapper[4803]: I0127 22:44:16.943581 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0"} Jan 27 22:44:16 crc kubenswrapper[4803]: I0127 22:44:16.943601 4803 scope.go:117] "RemoveContainer" containerID="f04efddd5f0a89aaa859e9223a7364b63efb443e71d91a2c1a438876994e301b" Jan 27 22:44:55 crc kubenswrapper[4803]: I0127 22:44:55.938202 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-scmx5"] Jan 27 22:44:55 crc kubenswrapper[4803]: E0127 22:44:55.939403 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3028236-a937-4b01-a16d-3df28e5ebc3d" containerName="extract-utilities" Jan 27 22:44:55 crc kubenswrapper[4803]: I0127 22:44:55.939421 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3028236-a937-4b01-a16d-3df28e5ebc3d" containerName="extract-utilities" Jan 27 22:44:55 crc kubenswrapper[4803]: E0127 22:44:55.939442 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3028236-a937-4b01-a16d-3df28e5ebc3d" containerName="registry-server" Jan 27 22:44:55 crc kubenswrapper[4803]: I0127 22:44:55.939449 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3028236-a937-4b01-a16d-3df28e5ebc3d" containerName="registry-server" Jan 27 22:44:55 crc kubenswrapper[4803]: E0127 22:44:55.939472 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3028236-a937-4b01-a16d-3df28e5ebc3d" containerName="extract-content" Jan 27 22:44:55 crc kubenswrapper[4803]: I0127 22:44:55.939480 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3028236-a937-4b01-a16d-3df28e5ebc3d" containerName="extract-content" Jan 27 22:44:55 crc kubenswrapper[4803]: E0127 22:44:55.939505 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c810905-c1c5-43c4-a774-de12c4d1ed59" containerName="extract-utilities" Jan 27 22:44:55 crc kubenswrapper[4803]: I0127 22:44:55.939512 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c810905-c1c5-43c4-a774-de12c4d1ed59" containerName="extract-utilities" Jan 27 22:44:55 crc kubenswrapper[4803]: E0127 22:44:55.939528 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c810905-c1c5-43c4-a774-de12c4d1ed59" containerName="registry-server" Jan 27 22:44:55 crc kubenswrapper[4803]: I0127 22:44:55.939536 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c810905-c1c5-43c4-a774-de12c4d1ed59" containerName="registry-server" Jan 27 22:44:55 crc kubenswrapper[4803]: E0127 22:44:55.939565 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c810905-c1c5-43c4-a774-de12c4d1ed59" containerName="extract-content" Jan 27 22:44:55 crc kubenswrapper[4803]: I0127 22:44:55.939573 4803 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8c810905-c1c5-43c4-a774-de12c4d1ed59" containerName="extract-content" Jan 27 22:44:55 crc kubenswrapper[4803]: I0127 22:44:55.939831 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3028236-a937-4b01-a16d-3df28e5ebc3d" containerName="registry-server" Jan 27 22:44:55 crc kubenswrapper[4803]: I0127 22:44:55.939894 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c810905-c1c5-43c4-a774-de12c4d1ed59" containerName="registry-server" Jan 27 22:44:55 crc kubenswrapper[4803]: I0127 22:44:55.942074 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:44:55 crc kubenswrapper[4803]: I0127 22:44:55.951770 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-scmx5"] Jan 27 22:44:56 crc kubenswrapper[4803]: I0127 22:44:56.033181 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-catalog-content\") pod \"certified-operators-scmx5\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:44:56 crc kubenswrapper[4803]: I0127 22:44:56.033294 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-utilities\") pod \"certified-operators-scmx5\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:44:56 crc kubenswrapper[4803]: I0127 22:44:56.033639 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsswh\" (UniqueName: \"kubernetes.io/projected/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-kube-api-access-gsswh\") pod \"certified-operators-scmx5\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:44:56 crc kubenswrapper[4803]: I0127 22:44:56.135318 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsswh\" (UniqueName: \"kubernetes.io/projected/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-kube-api-access-gsswh\") pod \"certified-operators-scmx5\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:44:56 crc kubenswrapper[4803]: I0127 22:44:56.135454 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-catalog-content\") pod \"certified-operators-scmx5\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:44:56 crc kubenswrapper[4803]: I0127 22:44:56.135504 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-utilities\") pod \"certified-operators-scmx5\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:44:56 crc kubenswrapper[4803]: I0127 22:44:56.135902 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-catalog-content\") pod 
\"certified-operators-scmx5\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:44:56 crc kubenswrapper[4803]: I0127 22:44:56.136172 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-utilities\") pod \"certified-operators-scmx5\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:44:56 crc kubenswrapper[4803]: I0127 22:44:56.157964 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsswh\" (UniqueName: \"kubernetes.io/projected/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-kube-api-access-gsswh\") pod \"certified-operators-scmx5\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:44:56 crc kubenswrapper[4803]: I0127 22:44:56.276930 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:44:56 crc kubenswrapper[4803]: I0127 22:44:56.744944 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-scmx5"] Jan 27 22:44:56 crc kubenswrapper[4803]: W0127 22:44:56.748331 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eef2cb1_b081_44c2_b78e_e39aa61ccab4.slice/crio-6f15552daa031498eb82e96a524495d6f19371116c7752771e6609c2f21728d5 WatchSource:0}: Error finding container 6f15552daa031498eb82e96a524495d6f19371116c7752771e6609c2f21728d5: Status 404 returned error can't find the container with id 6f15552daa031498eb82e96a524495d6f19371116c7752771e6609c2f21728d5 Jan 27 22:44:57 crc kubenswrapper[4803]: I0127 22:44:57.369442 4803 generic.go:334] "Generic (PLEG): container finished" podID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerID="f896f029f628b4219cf29ee691394191e0cbc8000ddea4eed975ae109b2b4d4e" exitCode=0 Jan 27 22:44:57 crc kubenswrapper[4803]: I0127 22:44:57.369486 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scmx5" event={"ID":"3eef2cb1-b081-44c2-b78e-e39aa61ccab4","Type":"ContainerDied","Data":"f896f029f628b4219cf29ee691394191e0cbc8000ddea4eed975ae109b2b4d4e"} Jan 27 22:44:57 crc kubenswrapper[4803]: I0127 22:44:57.369527 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scmx5" event={"ID":"3eef2cb1-b081-44c2-b78e-e39aa61ccab4","Type":"ContainerStarted","Data":"6f15552daa031498eb82e96a524495d6f19371116c7752771e6609c2f21728d5"} Jan 27 22:44:57 crc kubenswrapper[4803]: I0127 22:44:57.371796 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 22:44:59 crc kubenswrapper[4803]: I0127 22:44:59.392632 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scmx5" event={"ID":"3eef2cb1-b081-44c2-b78e-e39aa61ccab4","Type":"ContainerStarted","Data":"1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a"} Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.147231 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q"] Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.149779 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.153832 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.153975 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.169975 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q"] Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.241751 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c173b72-20fd-4abb-8613-4637ed383429-secret-volume\") pod \"collect-profiles-29492565-ps66q\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.241812 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c173b72-20fd-4abb-8613-4637ed383429-config-volume\") pod \"collect-profiles-29492565-ps66q\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.242086 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whld8\" (UniqueName: \"kubernetes.io/projected/9c173b72-20fd-4abb-8613-4637ed383429-kube-api-access-whld8\") pod \"collect-profiles-29492565-ps66q\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.349379 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whld8\" (UniqueName: \"kubernetes.io/projected/9c173b72-20fd-4abb-8613-4637ed383429-kube-api-access-whld8\") pod \"collect-profiles-29492565-ps66q\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.349638 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c173b72-20fd-4abb-8613-4637ed383429-secret-volume\") pod \"collect-profiles-29492565-ps66q\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.349711 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c173b72-20fd-4abb-8613-4637ed383429-config-volume\") pod \"collect-profiles-29492565-ps66q\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.358033 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c173b72-20fd-4abb-8613-4637ed383429-config-volume\") pod 
\"collect-profiles-29492565-ps66q\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.358782 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c173b72-20fd-4abb-8613-4637ed383429-secret-volume\") pod \"collect-profiles-29492565-ps66q\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.373513 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whld8\" (UniqueName: \"kubernetes.io/projected/9c173b72-20fd-4abb-8613-4637ed383429-kube-api-access-whld8\") pod \"collect-profiles-29492565-ps66q\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.407174 4803 generic.go:334] "Generic (PLEG): container finished" podID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerID="1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a" exitCode=0 Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.407334 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scmx5" event={"ID":"3eef2cb1-b081-44c2-b78e-e39aa61ccab4","Type":"ContainerDied","Data":"1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a"} Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.478367 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:00 crc kubenswrapper[4803]: I0127 22:45:00.964952 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q"] Jan 27 22:45:01 crc kubenswrapper[4803]: I0127 22:45:01.418757 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scmx5" event={"ID":"3eef2cb1-b081-44c2-b78e-e39aa61ccab4","Type":"ContainerStarted","Data":"40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5"} Jan 27 22:45:01 crc kubenswrapper[4803]: I0127 22:45:01.420073 4803 generic.go:334] "Generic (PLEG): container finished" podID="9c173b72-20fd-4abb-8613-4637ed383429" containerID="35c1dedc252a8f5705bce10ca09ae608777a664374a15b6b5d061c00915eec49" exitCode=0 Jan 27 22:45:01 crc kubenswrapper[4803]: I0127 22:45:01.420105 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" event={"ID":"9c173b72-20fd-4abb-8613-4637ed383429","Type":"ContainerDied","Data":"35c1dedc252a8f5705bce10ca09ae608777a664374a15b6b5d061c00915eec49"} Jan 27 22:45:01 crc kubenswrapper[4803]: I0127 22:45:01.420144 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" event={"ID":"9c173b72-20fd-4abb-8613-4637ed383429","Type":"ContainerStarted","Data":"aef6d12f7a1596680948ffa2108f9308ec66d3d807318a1b0a2df8a7bbd5aa57"} Jan 27 22:45:01 crc kubenswrapper[4803]: I0127 22:45:01.444787 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-scmx5" podStartSLOduration=2.906755752 podStartE2EDuration="6.444767686s" podCreationTimestamp="2026-01-27 22:44:55 +0000 
UTC" firstStartedPulling="2026-01-27 22:44:57.371523751 +0000 UTC m=+3449.787545450" lastFinishedPulling="2026-01-27 22:45:00.909535685 +0000 UTC m=+3453.325557384" observedRunningTime="2026-01-27 22:45:01.439613556 +0000 UTC m=+3453.855635255" watchObservedRunningTime="2026-01-27 22:45:01.444767686 +0000 UTC m=+3453.860789385" Jan 27 22:45:02 crc kubenswrapper[4803]: I0127 22:45:02.833893 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:02 crc kubenswrapper[4803]: I0127 22:45:02.917228 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c173b72-20fd-4abb-8613-4637ed383429-config-volume\") pod \"9c173b72-20fd-4abb-8613-4637ed383429\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " Jan 27 22:45:02 crc kubenswrapper[4803]: I0127 22:45:02.917410 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whld8\" (UniqueName: \"kubernetes.io/projected/9c173b72-20fd-4abb-8613-4637ed383429-kube-api-access-whld8\") pod \"9c173b72-20fd-4abb-8613-4637ed383429\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " Jan 27 22:45:02 crc kubenswrapper[4803]: I0127 22:45:02.917488 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c173b72-20fd-4abb-8613-4637ed383429-secret-volume\") pod \"9c173b72-20fd-4abb-8613-4637ed383429\" (UID: \"9c173b72-20fd-4abb-8613-4637ed383429\") " Jan 27 22:45:02 crc kubenswrapper[4803]: I0127 22:45:02.917939 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c173b72-20fd-4abb-8613-4637ed383429-config-volume" (OuterVolumeSpecName: "config-volume") pod "9c173b72-20fd-4abb-8613-4637ed383429" (UID: "9c173b72-20fd-4abb-8613-4637ed383429"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:45:02 crc kubenswrapper[4803]: I0127 22:45:02.918831 4803 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c173b72-20fd-4abb-8613-4637ed383429-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 22:45:02 crc kubenswrapper[4803]: I0127 22:45:02.924783 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c173b72-20fd-4abb-8613-4637ed383429-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9c173b72-20fd-4abb-8613-4637ed383429" (UID: "9c173b72-20fd-4abb-8613-4637ed383429"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:45:02 crc kubenswrapper[4803]: I0127 22:45:02.928786 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c173b72-20fd-4abb-8613-4637ed383429-kube-api-access-whld8" (OuterVolumeSpecName: "kube-api-access-whld8") pod "9c173b72-20fd-4abb-8613-4637ed383429" (UID: "9c173b72-20fd-4abb-8613-4637ed383429"). InnerVolumeSpecName "kube-api-access-whld8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:45:03 crc kubenswrapper[4803]: I0127 22:45:03.026018 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whld8\" (UniqueName: \"kubernetes.io/projected/9c173b72-20fd-4abb-8613-4637ed383429-kube-api-access-whld8\") on node \"crc\" DevicePath \"\"" Jan 27 22:45:03 crc kubenswrapper[4803]: I0127 22:45:03.026054 4803 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c173b72-20fd-4abb-8613-4637ed383429-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 22:45:03 crc kubenswrapper[4803]: I0127 22:45:03.452808 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" event={"ID":"9c173b72-20fd-4abb-8613-4637ed383429","Type":"ContainerDied","Data":"aef6d12f7a1596680948ffa2108f9308ec66d3d807318a1b0a2df8a7bbd5aa57"} Jan 27 22:45:03 crc kubenswrapper[4803]: I0127 22:45:03.452926 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aef6d12f7a1596680948ffa2108f9308ec66d3d807318a1b0a2df8a7bbd5aa57" Jan 27 22:45:03 crc kubenswrapper[4803]: I0127 22:45:03.453000 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492565-ps66q" Jan 27 22:45:03 crc kubenswrapper[4803]: I0127 22:45:03.922489 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm"] Jan 27 22:45:03 crc kubenswrapper[4803]: I0127 22:45:03.937323 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492520-bkldm"] Jan 27 22:45:04 crc kubenswrapper[4803]: I0127 22:45:04.323361 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ce2383-1f63-4adf-964b-2e6769ac9957" path="/var/lib/kubelet/pods/c9ce2383-1f63-4adf-964b-2e6769ac9957/volumes" Jan 27 22:45:06 crc kubenswrapper[4803]: I0127 22:45:06.277183 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:45:06 crc kubenswrapper[4803]: I0127 22:45:06.277804 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:45:07 crc kubenswrapper[4803]: I0127 22:45:07.325103 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-scmx5" podUID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerName="registry-server" probeResult="failure" output=< Jan 27 22:45:07 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:45:07 crc kubenswrapper[4803]: > Jan 27 22:45:16 crc kubenswrapper[4803]: I0127 22:45:16.341349 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:45:16 crc kubenswrapper[4803]: I0127 22:45:16.395808 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:45:16 crc kubenswrapper[4803]: I0127 22:45:16.582177 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-scmx5"] Jan 27 22:45:17 crc kubenswrapper[4803]: I0127 22:45:17.601253 4803 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-scmx5" podUID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerName="registry-server" containerID="cri-o://40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5" gracePeriod=2 Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.150983 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.288282 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsswh\" (UniqueName: \"kubernetes.io/projected/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-kube-api-access-gsswh\") pod \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.288442 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-catalog-content\") pod \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.288563 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-utilities\") pod \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\" (UID: \"3eef2cb1-b081-44c2-b78e-e39aa61ccab4\") " Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.289312 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-utilities" (OuterVolumeSpecName: "utilities") pod "3eef2cb1-b081-44c2-b78e-e39aa61ccab4" (UID: "3eef2cb1-b081-44c2-b78e-e39aa61ccab4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.294804 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-kube-api-access-gsswh" (OuterVolumeSpecName: "kube-api-access-gsswh") pod "3eef2cb1-b081-44c2-b78e-e39aa61ccab4" (UID: "3eef2cb1-b081-44c2-b78e-e39aa61ccab4"). InnerVolumeSpecName "kube-api-access-gsswh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.331798 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3eef2cb1-b081-44c2-b78e-e39aa61ccab4" (UID: "3eef2cb1-b081-44c2-b78e-e39aa61ccab4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.391530 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.391832 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.391923 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsswh\" (UniqueName: \"kubernetes.io/projected/3eef2cb1-b081-44c2-b78e-e39aa61ccab4-kube-api-access-gsswh\") on node \"crc\" DevicePath \"\"" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.613985 4803 generic.go:334] "Generic (PLEG): container finished" podID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerID="40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5" exitCode=0 Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.614030 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scmx5" event={"ID":"3eef2cb1-b081-44c2-b78e-e39aa61ccab4","Type":"ContainerDied","Data":"40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5"} Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.614060 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scmx5" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.614081 4803 scope.go:117] "RemoveContainer" containerID="40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.614065 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scmx5" event={"ID":"3eef2cb1-b081-44c2-b78e-e39aa61ccab4","Type":"ContainerDied","Data":"6f15552daa031498eb82e96a524495d6f19371116c7752771e6609c2f21728d5"} Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.649444 4803 scope.go:117] "RemoveContainer" containerID="1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.656652 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-scmx5"] Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.667411 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-scmx5"] Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.689142 4803 scope.go:117] "RemoveContainer" containerID="f896f029f628b4219cf29ee691394191e0cbc8000ddea4eed975ae109b2b4d4e" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.730810 4803 scope.go:117] "RemoveContainer" containerID="40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5" Jan 27 22:45:18 crc kubenswrapper[4803]: E0127 22:45:18.731358 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5\": container with ID starting with 40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5 not found: ID does not exist" containerID="40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.731387 
4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5"} err="failed to get container status \"40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5\": rpc error: code = NotFound desc = could not find container \"40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5\": container with ID starting with 40f9fa95a903413d8217ff09ee24d0d81bc82ad81b43ac8292a10933e17b95f5 not found: ID does not exist" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.731412 4803 scope.go:117] "RemoveContainer" containerID="1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a" Jan 27 22:45:18 crc kubenswrapper[4803]: E0127 22:45:18.731779 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a\": container with ID starting with 1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a not found: ID does not exist" containerID="1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.731800 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a"} err="failed to get container status \"1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a\": rpc error: code = NotFound desc = could not find container \"1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a\": container with ID starting with 1e2f5d644a3ee16489851cd3a6bd3e321c67dfd71662cd519d01b573588b382a not found: ID does not exist" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.731816 4803 scope.go:117] "RemoveContainer" containerID="f896f029f628b4219cf29ee691394191e0cbc8000ddea4eed975ae109b2b4d4e" Jan 27 22:45:18 crc kubenswrapper[4803]: E0127 22:45:18.732160 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f896f029f628b4219cf29ee691394191e0cbc8000ddea4eed975ae109b2b4d4e\": container with ID starting with f896f029f628b4219cf29ee691394191e0cbc8000ddea4eed975ae109b2b4d4e not found: ID does not exist" containerID="f896f029f628b4219cf29ee691394191e0cbc8000ddea4eed975ae109b2b4d4e" Jan 27 22:45:18 crc kubenswrapper[4803]: I0127 22:45:18.732181 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f896f029f628b4219cf29ee691394191e0cbc8000ddea4eed975ae109b2b4d4e"} err="failed to get container status \"f896f029f628b4219cf29ee691394191e0cbc8000ddea4eed975ae109b2b4d4e\": rpc error: code = NotFound desc = could not find container \"f896f029f628b4219cf29ee691394191e0cbc8000ddea4eed975ae109b2b4d4e\": container with ID starting with f896f029f628b4219cf29ee691394191e0cbc8000ddea4eed975ae109b2b4d4e not found: ID does not exist" Jan 27 22:45:20 crc kubenswrapper[4803]: I0127 22:45:20.324522 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" path="/var/lib/kubelet/pods/3eef2cb1-b081-44c2-b78e-e39aa61ccab4/volumes" Jan 27 22:45:35 crc kubenswrapper[4803]: I0127 22:45:35.146402 4803 scope.go:117] "RemoveContainer" containerID="af0604cdc2a15ae5769e54836ad8d72d933fafa6ccb30db1ea870d4dde063135" Jan 27 22:45:55 crc kubenswrapper[4803]: I0127 22:45:55.348923 4803 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" podUID="2beb4659-d63e-495f-a32f-f94cbcbbc1ce" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.229446 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c2zkb"] Jan 27 22:46:00 crc kubenswrapper[4803]: E0127 22:46:00.230879 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerName="extract-utilities" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.230901 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerName="extract-utilities" Jan 27 22:46:00 crc kubenswrapper[4803]: E0127 22:46:00.230940 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerName="registry-server" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.230949 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerName="registry-server" Jan 27 22:46:00 crc kubenswrapper[4803]: E0127 22:46:00.230989 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c173b72-20fd-4abb-8613-4637ed383429" containerName="collect-profiles" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.230999 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c173b72-20fd-4abb-8613-4637ed383429" containerName="collect-profiles" Jan 27 22:46:00 crc kubenswrapper[4803]: E0127 22:46:00.231026 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerName="extract-content" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.231034 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerName="extract-content" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.231405 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eef2cb1-b081-44c2-b78e-e39aa61ccab4" containerName="registry-server" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.231449 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c173b72-20fd-4abb-8613-4637ed383429" containerName="collect-profiles" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.233791 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.252616 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c2zkb"] Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.340111 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkwh\" (UniqueName: \"kubernetes.io/projected/5770b641-6c17-4cb5-af5e-0f2b838bcc69-kube-api-access-zwkwh\") pod \"redhat-operators-c2zkb\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.340349 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-utilities\") pod \"redhat-operators-c2zkb\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.340415 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-catalog-content\") pod \"redhat-operators-c2zkb\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.442746 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-catalog-content\") pod \"redhat-operators-c2zkb\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.442837 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkwh\" (UniqueName: \"kubernetes.io/projected/5770b641-6c17-4cb5-af5e-0f2b838bcc69-kube-api-access-zwkwh\") pod \"redhat-operators-c2zkb\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.442979 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-utilities\") pod \"redhat-operators-c2zkb\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.443425 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-utilities\") pod \"redhat-operators-c2zkb\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.443610 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-catalog-content\") pod \"redhat-operators-c2zkb\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.465423 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zwkwh\" (UniqueName: \"kubernetes.io/projected/5770b641-6c17-4cb5-af5e-0f2b838bcc69-kube-api-access-zwkwh\") pod \"redhat-operators-c2zkb\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:00 crc kubenswrapper[4803]: I0127 22:46:00.554636 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:01 crc kubenswrapper[4803]: I0127 22:46:01.106571 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c2zkb"] Jan 27 22:46:02 crc kubenswrapper[4803]: I0127 22:46:02.084974 4803 generic.go:334] "Generic (PLEG): container finished" podID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerID="d76f6598a8af91a41d948c8861e5e8ac901a87dadde41d19bd3c635298014734" exitCode=0 Jan 27 22:46:02 crc kubenswrapper[4803]: I0127 22:46:02.085042 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2zkb" event={"ID":"5770b641-6c17-4cb5-af5e-0f2b838bcc69","Type":"ContainerDied","Data":"d76f6598a8af91a41d948c8861e5e8ac901a87dadde41d19bd3c635298014734"} Jan 27 22:46:02 crc kubenswrapper[4803]: I0127 22:46:02.085314 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2zkb" event={"ID":"5770b641-6c17-4cb5-af5e-0f2b838bcc69","Type":"ContainerStarted","Data":"c9c7b1e29b3207a07c59024fe350eeba3397ad50eee1bf0e94a19caa19847996"} Jan 27 22:46:03 crc kubenswrapper[4803]: I0127 22:46:03.101370 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2zkb" event={"ID":"5770b641-6c17-4cb5-af5e-0f2b838bcc69","Type":"ContainerStarted","Data":"5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044"} Jan 27 22:46:07 crc kubenswrapper[4803]: I0127 22:46:07.140418 4803 generic.go:334] "Generic (PLEG): container finished" podID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerID="5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044" exitCode=0 Jan 27 22:46:07 crc kubenswrapper[4803]: I0127 22:46:07.140481 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2zkb" event={"ID":"5770b641-6c17-4cb5-af5e-0f2b838bcc69","Type":"ContainerDied","Data":"5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044"} Jan 27 22:46:08 crc kubenswrapper[4803]: I0127 22:46:08.156499 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2zkb" event={"ID":"5770b641-6c17-4cb5-af5e-0f2b838bcc69","Type":"ContainerStarted","Data":"b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382"} Jan 27 22:46:08 crc kubenswrapper[4803]: I0127 22:46:08.175721 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c2zkb" podStartSLOduration=2.737512066 podStartE2EDuration="8.175698383s" podCreationTimestamp="2026-01-27 22:46:00 +0000 UTC" firstStartedPulling="2026-01-27 22:46:02.087158967 +0000 UTC m=+3514.503180666" lastFinishedPulling="2026-01-27 22:46:07.525345284 +0000 UTC m=+3519.941366983" observedRunningTime="2026-01-27 22:46:08.17523173 +0000 UTC m=+3520.591253429" watchObservedRunningTime="2026-01-27 22:46:08.175698383 +0000 UTC m=+3520.591720082" Jan 27 22:46:10 crc kubenswrapper[4803]: I0127 22:46:10.556157 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 
22:46:10 crc kubenswrapper[4803]: I0127 22:46:10.556755 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:11 crc kubenswrapper[4803]: I0127 22:46:11.599113 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c2zkb" podUID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerName="registry-server" probeResult="failure" output=< Jan 27 22:46:11 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:46:11 crc kubenswrapper[4803]: > Jan 27 22:46:16 crc kubenswrapper[4803]: I0127 22:46:16.343298 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:46:16 crc kubenswrapper[4803]: I0127 22:46:16.344004 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:46:21 crc kubenswrapper[4803]: I0127 22:46:21.599527 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c2zkb" podUID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerName="registry-server" probeResult="failure" output=< Jan 27 22:46:21 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:46:21 crc kubenswrapper[4803]: > Jan 27 22:46:30 crc kubenswrapper[4803]: I0127 22:46:30.607292 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:30 crc kubenswrapper[4803]: I0127 22:46:30.659933 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:31 crc kubenswrapper[4803]: I0127 22:46:31.426952 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c2zkb"] Jan 27 22:46:32 crc kubenswrapper[4803]: I0127 22:46:32.397979 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c2zkb" podUID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerName="registry-server" containerID="cri-o://b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382" gracePeriod=2 Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.070906 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.227278 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwkwh\" (UniqueName: \"kubernetes.io/projected/5770b641-6c17-4cb5-af5e-0f2b838bcc69-kube-api-access-zwkwh\") pod \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.227376 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-catalog-content\") pod \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.227425 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-utilities\") pod \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\" (UID: \"5770b641-6c17-4cb5-af5e-0f2b838bcc69\") " Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.228433 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-utilities" (OuterVolumeSpecName: "utilities") pod "5770b641-6c17-4cb5-af5e-0f2b838bcc69" (UID: "5770b641-6c17-4cb5-af5e-0f2b838bcc69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.250619 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5770b641-6c17-4cb5-af5e-0f2b838bcc69-kube-api-access-zwkwh" (OuterVolumeSpecName: "kube-api-access-zwkwh") pod "5770b641-6c17-4cb5-af5e-0f2b838bcc69" (UID: "5770b641-6c17-4cb5-af5e-0f2b838bcc69"). InnerVolumeSpecName "kube-api-access-zwkwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.330100 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwkwh\" (UniqueName: \"kubernetes.io/projected/5770b641-6c17-4cb5-af5e-0f2b838bcc69-kube-api-access-zwkwh\") on node \"crc\" DevicePath \"\"" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.330148 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.365226 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5770b641-6c17-4cb5-af5e-0f2b838bcc69" (UID: "5770b641-6c17-4cb5-af5e-0f2b838bcc69"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.411447 4803 generic.go:334] "Generic (PLEG): container finished" podID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerID="b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382" exitCode=0 Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.411510 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2zkb" event={"ID":"5770b641-6c17-4cb5-af5e-0f2b838bcc69","Type":"ContainerDied","Data":"b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382"} Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.411541 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c2zkb" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.411576 4803 scope.go:117] "RemoveContainer" containerID="b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.411561 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2zkb" event={"ID":"5770b641-6c17-4cb5-af5e-0f2b838bcc69","Type":"ContainerDied","Data":"c9c7b1e29b3207a07c59024fe350eeba3397ad50eee1bf0e94a19caa19847996"} Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.433150 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5770b641-6c17-4cb5-af5e-0f2b838bcc69-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.433442 4803 scope.go:117] "RemoveContainer" containerID="5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.462501 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c2zkb"] Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.491959 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c2zkb"] Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.499016 4803 scope.go:117] "RemoveContainer" containerID="d76f6598a8af91a41d948c8861e5e8ac901a87dadde41d19bd3c635298014734" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.528405 4803 scope.go:117] "RemoveContainer" containerID="b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382" Jan 27 22:46:33 crc kubenswrapper[4803]: E0127 22:46:33.529143 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382\": container with ID starting with b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382 not found: ID does not exist" containerID="b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.529207 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382"} err="failed to get container status \"b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382\": rpc error: code = NotFound desc = could not find container \"b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382\": container with ID starting with b606257bcd5c43d770f22123f07c1ca6bdeb40412f5f4508d0aa05560018f382 not found: ID does not exist" Jan 27 22:46:33 crc 
kubenswrapper[4803]: I0127 22:46:33.529250 4803 scope.go:117] "RemoveContainer" containerID="5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044" Jan 27 22:46:33 crc kubenswrapper[4803]: E0127 22:46:33.529651 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044\": container with ID starting with 5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044 not found: ID does not exist" containerID="5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.529704 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044"} err="failed to get container status \"5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044\": rpc error: code = NotFound desc = could not find container \"5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044\": container with ID starting with 5b5eaaa2b9061414f81e837a25e21f1776acf68bafe69118e5afc9a600621044 not found: ID does not exist" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.529747 4803 scope.go:117] "RemoveContainer" containerID="d76f6598a8af91a41d948c8861e5e8ac901a87dadde41d19bd3c635298014734" Jan 27 22:46:33 crc kubenswrapper[4803]: E0127 22:46:33.530054 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76f6598a8af91a41d948c8861e5e8ac901a87dadde41d19bd3c635298014734\": container with ID starting with d76f6598a8af91a41d948c8861e5e8ac901a87dadde41d19bd3c635298014734 not found: ID does not exist" containerID="d76f6598a8af91a41d948c8861e5e8ac901a87dadde41d19bd3c635298014734" Jan 27 22:46:33 crc kubenswrapper[4803]: I0127 22:46:33.530078 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76f6598a8af91a41d948c8861e5e8ac901a87dadde41d19bd3c635298014734"} err="failed to get container status \"d76f6598a8af91a41d948c8861e5e8ac901a87dadde41d19bd3c635298014734\": rpc error: code = NotFound desc = could not find container \"d76f6598a8af91a41d948c8861e5e8ac901a87dadde41d19bd3c635298014734\": container with ID starting with d76f6598a8af91a41d948c8861e5e8ac901a87dadde41d19bd3c635298014734 not found: ID does not exist" Jan 27 22:46:34 crc kubenswrapper[4803]: I0127 22:46:34.326690 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" path="/var/lib/kubelet/pods/5770b641-6c17-4cb5-af5e-0f2b838bcc69/volumes" Jan 27 22:46:46 crc kubenswrapper[4803]: I0127 22:46:46.343430 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:46:46 crc kubenswrapper[4803]: I0127 22:46:46.344119 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:47:16 crc kubenswrapper[4803]: I0127 22:47:16.343088 4803 patch_prober.go:28] interesting 
pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:47:16 crc kubenswrapper[4803]: I0127 22:47:16.343529 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:47:16 crc kubenswrapper[4803]: I0127 22:47:16.343571 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 22:47:16 crc kubenswrapper[4803]: I0127 22:47:16.344471 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 22:47:16 crc kubenswrapper[4803]: I0127 22:47:16.344526 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" gracePeriod=600 Jan 27 22:47:16 crc kubenswrapper[4803]: E0127 22:47:16.463578 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:47:16 crc kubenswrapper[4803]: I0127 22:47:16.879627 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" exitCode=0 Jan 27 22:47:16 crc kubenswrapper[4803]: I0127 22:47:16.879686 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0"} Jan 27 22:47:16 crc kubenswrapper[4803]: I0127 22:47:16.880314 4803 scope.go:117] "RemoveContainer" containerID="66ebb0459d51e52a323f553759add2a10dd54207ac59075aca12aa4ffd2e9a83" Jan 27 22:47:16 crc kubenswrapper[4803]: I0127 22:47:16.881274 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:47:16 crc kubenswrapper[4803]: E0127 22:47:16.881714 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:47:27 crc kubenswrapper[4803]: I0127 22:47:27.307666 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:47:27 crc kubenswrapper[4803]: E0127 22:47:27.308751 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:47:40 crc kubenswrapper[4803]: I0127 22:47:40.307101 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:47:40 crc kubenswrapper[4803]: E0127 22:47:40.308020 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:47:51 crc kubenswrapper[4803]: I0127 22:47:51.307134 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:47:51 crc kubenswrapper[4803]: E0127 22:47:51.307999 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:48:03 crc kubenswrapper[4803]: I0127 22:48:03.307293 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:48:03 crc kubenswrapper[4803]: E0127 22:48:03.308075 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:48:16 crc kubenswrapper[4803]: I0127 22:48:16.307362 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:48:16 crc kubenswrapper[4803]: E0127 22:48:16.308251 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:48:30 crc kubenswrapper[4803]: I0127 22:48:30.307029 4803 
scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:48:30 crc kubenswrapper[4803]: E0127 22:48:30.307680 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:48:42 crc kubenswrapper[4803]: I0127 22:48:42.307731 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:48:42 crc kubenswrapper[4803]: E0127 22:48:42.308532 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:48:57 crc kubenswrapper[4803]: I0127 22:48:57.306681 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:48:57 crc kubenswrapper[4803]: E0127 22:48:57.308590 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:49:12 crc kubenswrapper[4803]: I0127 22:49:12.307004 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:49:12 crc kubenswrapper[4803]: E0127 22:49:12.307891 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:49:23 crc kubenswrapper[4803]: I0127 22:49:23.307245 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:49:23 crc kubenswrapper[4803]: E0127 22:49:23.308158 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:49:36 crc kubenswrapper[4803]: I0127 22:49:36.313011 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:49:36 crc kubenswrapper[4803]: E0127 22:49:36.314082 4803 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:49:48 crc kubenswrapper[4803]: I0127 22:49:48.315132 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:49:48 crc kubenswrapper[4803]: E0127 22:49:48.316318 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.306989 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:50:00 crc kubenswrapper[4803]: E0127 22:50:00.307811 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.503126 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zht7l"] Jan 27 22:50:00 crc kubenswrapper[4803]: E0127 22:50:00.503744 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerName="registry-server" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.503771 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerName="registry-server" Jan 27 22:50:00 crc kubenswrapper[4803]: E0127 22:50:00.503805 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerName="extract-utilities" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.503816 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerName="extract-utilities" Jan 27 22:50:00 crc kubenswrapper[4803]: E0127 22:50:00.503866 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerName="extract-content" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.503877 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerName="extract-content" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.504265 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="5770b641-6c17-4cb5-af5e-0f2b838bcc69" containerName="registry-server" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.506622 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.518725 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zht7l"] Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.530330 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-catalog-content\") pod \"community-operators-zht7l\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.530383 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwt49\" (UniqueName: \"kubernetes.io/projected/87648113-4e3f-4719-9e68-218575b10cdc-kube-api-access-zwt49\") pod \"community-operators-zht7l\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.530422 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-utilities\") pod \"community-operators-zht7l\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.633417 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-catalog-content\") pod \"community-operators-zht7l\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.633459 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwt49\" (UniqueName: \"kubernetes.io/projected/87648113-4e3f-4719-9e68-218575b10cdc-kube-api-access-zwt49\") pod \"community-operators-zht7l\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.633495 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-utilities\") pod \"community-operators-zht7l\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.634048 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-catalog-content\") pod \"community-operators-zht7l\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.637101 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-utilities\") pod \"community-operators-zht7l\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.658728 4803 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zwt49\" (UniqueName: \"kubernetes.io/projected/87648113-4e3f-4719-9e68-218575b10cdc-kube-api-access-zwt49\") pod \"community-operators-zht7l\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:00 crc kubenswrapper[4803]: I0127 22:50:00.833420 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:01 crc kubenswrapper[4803]: I0127 22:50:01.445579 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zht7l"] Jan 27 22:50:01 crc kubenswrapper[4803]: I0127 22:50:01.604484 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zht7l" event={"ID":"87648113-4e3f-4719-9e68-218575b10cdc","Type":"ContainerStarted","Data":"d861ebf9cd99e5ba6cddd6b904f8b47581880e1c48d365c22479d9d3a5cf0c1c"} Jan 27 22:50:02 crc kubenswrapper[4803]: I0127 22:50:02.615626 4803 generic.go:334] "Generic (PLEG): container finished" podID="87648113-4e3f-4719-9e68-218575b10cdc" containerID="fc7ce4a4328cbc03fae29770a9f1c989685507aae03992e68395f4898ce871d9" exitCode=0 Jan 27 22:50:02 crc kubenswrapper[4803]: I0127 22:50:02.615705 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zht7l" event={"ID":"87648113-4e3f-4719-9e68-218575b10cdc","Type":"ContainerDied","Data":"fc7ce4a4328cbc03fae29770a9f1c989685507aae03992e68395f4898ce871d9"} Jan 27 22:50:02 crc kubenswrapper[4803]: I0127 22:50:02.618635 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 22:50:03 crc kubenswrapper[4803]: I0127 22:50:03.629528 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zht7l" event={"ID":"87648113-4e3f-4719-9e68-218575b10cdc","Type":"ContainerStarted","Data":"89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f"} Jan 27 22:50:05 crc kubenswrapper[4803]: I0127 22:50:05.649517 4803 generic.go:334] "Generic (PLEG): container finished" podID="87648113-4e3f-4719-9e68-218575b10cdc" containerID="89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f" exitCode=0 Jan 27 22:50:05 crc kubenswrapper[4803]: I0127 22:50:05.649619 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zht7l" event={"ID":"87648113-4e3f-4719-9e68-218575b10cdc","Type":"ContainerDied","Data":"89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f"} Jan 27 22:50:06 crc kubenswrapper[4803]: I0127 22:50:06.663347 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zht7l" event={"ID":"87648113-4e3f-4719-9e68-218575b10cdc","Type":"ContainerStarted","Data":"18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0"} Jan 27 22:50:06 crc kubenswrapper[4803]: I0127 22:50:06.679971 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zht7l" podStartSLOduration=3.229899836 podStartE2EDuration="6.679945465s" podCreationTimestamp="2026-01-27 22:50:00 +0000 UTC" firstStartedPulling="2026-01-27 22:50:02.618296073 +0000 UTC m=+3755.034317772" lastFinishedPulling="2026-01-27 22:50:06.068341702 +0000 UTC m=+3758.484363401" observedRunningTime="2026-01-27 22:50:06.678334102 +0000 UTC m=+3759.094355821" watchObservedRunningTime="2026-01-27 
22:50:06.679945465 +0000 UTC m=+3759.095967194" Jan 27 22:50:10 crc kubenswrapper[4803]: I0127 22:50:10.834599 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:10 crc kubenswrapper[4803]: I0127 22:50:10.836965 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:10 crc kubenswrapper[4803]: I0127 22:50:10.885718 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:11 crc kubenswrapper[4803]: I0127 22:50:11.770204 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:11 crc kubenswrapper[4803]: I0127 22:50:11.821546 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zht7l"] Jan 27 22:50:13 crc kubenswrapper[4803]: I0127 22:50:13.728463 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zht7l" podUID="87648113-4e3f-4719-9e68-218575b10cdc" containerName="registry-server" containerID="cri-o://18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0" gracePeriod=2 Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.236135 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.269151 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-utilities\") pod \"87648113-4e3f-4719-9e68-218575b10cdc\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.269439 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwt49\" (UniqueName: \"kubernetes.io/projected/87648113-4e3f-4719-9e68-218575b10cdc-kube-api-access-zwt49\") pod \"87648113-4e3f-4719-9e68-218575b10cdc\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.269517 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-catalog-content\") pod \"87648113-4e3f-4719-9e68-218575b10cdc\" (UID: \"87648113-4e3f-4719-9e68-218575b10cdc\") " Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.286448 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-utilities" (OuterVolumeSpecName: "utilities") pod "87648113-4e3f-4719-9e68-218575b10cdc" (UID: "87648113-4e3f-4719-9e68-218575b10cdc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.316234 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87648113-4e3f-4719-9e68-218575b10cdc-kube-api-access-zwt49" (OuterVolumeSpecName: "kube-api-access-zwt49") pod "87648113-4e3f-4719-9e68-218575b10cdc" (UID: "87648113-4e3f-4719-9e68-218575b10cdc"). InnerVolumeSpecName "kube-api-access-zwt49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.324450 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:50:14 crc kubenswrapper[4803]: E0127 22:50:14.328061 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.351019 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87648113-4e3f-4719-9e68-218575b10cdc" (UID: "87648113-4e3f-4719-9e68-218575b10cdc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.376202 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.376236 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwt49\" (UniqueName: \"kubernetes.io/projected/87648113-4e3f-4719-9e68-218575b10cdc-kube-api-access-zwt49\") on node \"crc\" DevicePath \"\"" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.376250 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87648113-4e3f-4719-9e68-218575b10cdc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.741117 4803 generic.go:334] "Generic (PLEG): container finished" podID="87648113-4e3f-4719-9e68-218575b10cdc" containerID="18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0" exitCode=0 Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.741170 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zht7l" event={"ID":"87648113-4e3f-4719-9e68-218575b10cdc","Type":"ContainerDied","Data":"18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0"} Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.741202 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zht7l" event={"ID":"87648113-4e3f-4719-9e68-218575b10cdc","Type":"ContainerDied","Data":"d861ebf9cd99e5ba6cddd6b904f8b47581880e1c48d365c22479d9d3a5cf0c1c"} Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.741222 4803 scope.go:117] "RemoveContainer" containerID="18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.741264 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zht7l" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.761413 4803 scope.go:117] "RemoveContainer" containerID="89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.785820 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zht7l"] Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.800177 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zht7l"] Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.819915 4803 scope.go:117] "RemoveContainer" containerID="fc7ce4a4328cbc03fae29770a9f1c989685507aae03992e68395f4898ce871d9" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.852983 4803 scope.go:117] "RemoveContainer" containerID="18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0" Jan 27 22:50:14 crc kubenswrapper[4803]: E0127 22:50:14.853719 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0\": container with ID starting with 18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0 not found: ID does not exist" containerID="18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.853757 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0"} err="failed to get container status \"18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0\": rpc error: code = NotFound desc = could not find container \"18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0\": container with ID starting with 18941c737cc82911d4b0a55a579ecdd481f21e03dc283d9ca2680003cd3b80b0 not found: ID does not exist" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.853781 4803 scope.go:117] "RemoveContainer" containerID="89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f" Jan 27 22:50:14 crc kubenswrapper[4803]: E0127 22:50:14.855343 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f\": container with ID starting with 89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f not found: ID does not exist" containerID="89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.855402 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f"} err="failed to get container status \"89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f\": rpc error: code = NotFound desc = could not find container \"89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f\": container with ID starting with 89c368dbade671b1cb39cc19e61cfbe43c36e999640af54f6cbbe341bfc31b5f not found: ID does not exist" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.855450 4803 scope.go:117] "RemoveContainer" containerID="fc7ce4a4328cbc03fae29770a9f1c989685507aae03992e68395f4898ce871d9" Jan 27 22:50:14 crc kubenswrapper[4803]: E0127 22:50:14.855968 4803 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fc7ce4a4328cbc03fae29770a9f1c989685507aae03992e68395f4898ce871d9\": container with ID starting with fc7ce4a4328cbc03fae29770a9f1c989685507aae03992e68395f4898ce871d9 not found: ID does not exist" containerID="fc7ce4a4328cbc03fae29770a9f1c989685507aae03992e68395f4898ce871d9" Jan 27 22:50:14 crc kubenswrapper[4803]: I0127 22:50:14.855998 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc7ce4a4328cbc03fae29770a9f1c989685507aae03992e68395f4898ce871d9"} err="failed to get container status \"fc7ce4a4328cbc03fae29770a9f1c989685507aae03992e68395f4898ce871d9\": rpc error: code = NotFound desc = could not find container \"fc7ce4a4328cbc03fae29770a9f1c989685507aae03992e68395f4898ce871d9\": container with ID starting with fc7ce4a4328cbc03fae29770a9f1c989685507aae03992e68395f4898ce871d9 not found: ID does not exist" Jan 27 22:50:16 crc kubenswrapper[4803]: I0127 22:50:16.318843 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87648113-4e3f-4719-9e68-218575b10cdc" path="/var/lib/kubelet/pods/87648113-4e3f-4719-9e68-218575b10cdc/volumes" Jan 27 22:50:29 crc kubenswrapper[4803]: I0127 22:50:29.307178 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:50:29 crc kubenswrapper[4803]: E0127 22:50:29.307977 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:50:41 crc kubenswrapper[4803]: I0127 22:50:41.307654 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:50:41 crc kubenswrapper[4803]: E0127 22:50:41.308533 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:50:56 crc kubenswrapper[4803]: I0127 22:50:56.307306 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:50:56 crc kubenswrapper[4803]: E0127 22:50:56.308093 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:51:07 crc kubenswrapper[4803]: I0127 22:51:07.307505 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:51:07 crc kubenswrapper[4803]: E0127 22:51:07.308379 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:51:18 crc kubenswrapper[4803]: I0127 22:51:18.317921 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:51:18 crc kubenswrapper[4803]: E0127 22:51:18.320489 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:51:33 crc kubenswrapper[4803]: I0127 22:51:33.307180 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:51:33 crc kubenswrapper[4803]: E0127 22:51:33.308236 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:51:44 crc kubenswrapper[4803]: I0127 22:51:44.307552 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:51:44 crc kubenswrapper[4803]: E0127 22:51:44.309112 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:51:57 crc kubenswrapper[4803]: I0127 22:51:57.306906 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:51:57 crc kubenswrapper[4803]: E0127 22:51:57.307715 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:52:12 crc kubenswrapper[4803]: I0127 22:52:12.306744 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:52:12 crc kubenswrapper[4803]: E0127 22:52:12.307680 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 22:52:26 crc kubenswrapper[4803]: I0127 22:52:26.308361 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:52:27 crc kubenswrapper[4803]: I0127 22:52:27.101633 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"ee4ce493c1e5d5c7ba473144f65c9a2ec956a1a31df6273a4a67a184d07d2e5a"} Jan 27 22:54:46 crc kubenswrapper[4803]: I0127 22:54:46.343938 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:54:46 crc kubenswrapper[4803]: I0127 22:54:46.344655 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:55:16 crc kubenswrapper[4803]: I0127 22:55:16.343241 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:55:16 crc kubenswrapper[4803]: I0127 22:55:16.343872 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.076773 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2c9gf"] Jan 27 22:55:17 crc kubenswrapper[4803]: E0127 22:55:17.077526 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87648113-4e3f-4719-9e68-218575b10cdc" containerName="extract-content" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.077555 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="87648113-4e3f-4719-9e68-218575b10cdc" containerName="extract-content" Jan 27 22:55:17 crc kubenswrapper[4803]: E0127 22:55:17.077600 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87648113-4e3f-4719-9e68-218575b10cdc" containerName="registry-server" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.077609 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="87648113-4e3f-4719-9e68-218575b10cdc" containerName="registry-server" Jan 27 22:55:17 crc kubenswrapper[4803]: E0127 22:55:17.077626 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87648113-4e3f-4719-9e68-218575b10cdc" containerName="extract-utilities" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.077634 4803 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="87648113-4e3f-4719-9e68-218575b10cdc" containerName="extract-utilities" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.077938 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="87648113-4e3f-4719-9e68-218575b10cdc" containerName="registry-server" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.080155 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.088816 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2c9gf"] Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.120862 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-utilities\") pod \"certified-operators-2c9gf\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.121287 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-catalog-content\") pod \"certified-operators-2c9gf\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.121326 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfm59\" (UniqueName: \"kubernetes.io/projected/faeed815-483a-4f4c-afce-47a0a35822a4-kube-api-access-rfm59\") pod \"certified-operators-2c9gf\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.224012 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-catalog-content\") pod \"certified-operators-2c9gf\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.224073 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfm59\" (UniqueName: \"kubernetes.io/projected/faeed815-483a-4f4c-afce-47a0a35822a4-kube-api-access-rfm59\") pod \"certified-operators-2c9gf\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.224186 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-utilities\") pod \"certified-operators-2c9gf\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.224610 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-catalog-content\") pod \"certified-operators-2c9gf\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.224750 4803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-utilities\") pod \"certified-operators-2c9gf\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.245589 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfm59\" (UniqueName: \"kubernetes.io/projected/faeed815-483a-4f4c-afce-47a0a35822a4-kube-api-access-rfm59\") pod \"certified-operators-2c9gf\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.409948 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:17 crc kubenswrapper[4803]: I0127 22:55:17.939779 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2c9gf"] Jan 27 22:55:18 crc kubenswrapper[4803]: I0127 22:55:18.877315 4803 generic.go:334] "Generic (PLEG): container finished" podID="faeed815-483a-4f4c-afce-47a0a35822a4" containerID="d100ca0a8ef298c38f5a30a7a973ddc927edaa5fc10f9c96548f18e5c78e4c99" exitCode=0 Jan 27 22:55:18 crc kubenswrapper[4803]: I0127 22:55:18.877424 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c9gf" event={"ID":"faeed815-483a-4f4c-afce-47a0a35822a4","Type":"ContainerDied","Data":"d100ca0a8ef298c38f5a30a7a973ddc927edaa5fc10f9c96548f18e5c78e4c99"} Jan 27 22:55:18 crc kubenswrapper[4803]: I0127 22:55:18.877643 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c9gf" event={"ID":"faeed815-483a-4f4c-afce-47a0a35822a4","Type":"ContainerStarted","Data":"5304b7b4a8676fb7b07f6e2a98327a818a4425a073ed77e80d63ca9a085061b9"} Jan 27 22:55:18 crc kubenswrapper[4803]: I0127 22:55:18.881788 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 22:55:19 crc kubenswrapper[4803]: I0127 22:55:19.890299 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c9gf" event={"ID":"faeed815-483a-4f4c-afce-47a0a35822a4","Type":"ContainerStarted","Data":"ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3"} Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.284574 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hg2h2"] Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.286875 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.360300 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg2h2"] Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.402248 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgw4f\" (UniqueName: \"kubernetes.io/projected/d6e32da0-91ce-49f6-8f4e-928b9fee6fdf-kube-api-access-mgw4f\") pod \"redhat-marketplace-hg2h2\" (UID: \"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf\") " pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.402460 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e32da0-91ce-49f6-8f4e-928b9fee6fdf-catalog-content\") pod \"redhat-marketplace-hg2h2\" (UID: \"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf\") " pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.402620 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e32da0-91ce-49f6-8f4e-928b9fee6fdf-utilities\") pod \"redhat-marketplace-hg2h2\" (UID: \"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf\") " pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.505943 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgw4f\" (UniqueName: \"kubernetes.io/projected/d6e32da0-91ce-49f6-8f4e-928b9fee6fdf-kube-api-access-mgw4f\") pod \"redhat-marketplace-hg2h2\" (UID: \"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf\") " pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.506433 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e32da0-91ce-49f6-8f4e-928b9fee6fdf-catalog-content\") pod \"redhat-marketplace-hg2h2\" (UID: \"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf\") " pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.506506 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e32da0-91ce-49f6-8f4e-928b9fee6fdf-utilities\") pod \"redhat-marketplace-hg2h2\" (UID: \"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf\") " pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.506975 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e32da0-91ce-49f6-8f4e-928b9fee6fdf-catalog-content\") pod \"redhat-marketplace-hg2h2\" (UID: \"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf\") " pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.507063 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e32da0-91ce-49f6-8f4e-928b9fee6fdf-utilities\") pod \"redhat-marketplace-hg2h2\" (UID: \"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf\") " pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.525891 4803 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mgw4f\" (UniqueName: \"kubernetes.io/projected/d6e32da0-91ce-49f6-8f4e-928b9fee6fdf-kube-api-access-mgw4f\") pod \"redhat-marketplace-hg2h2\" (UID: \"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf\") " pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:20 crc kubenswrapper[4803]: I0127 22:55:20.620582 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:21 crc kubenswrapper[4803]: W0127 22:55:21.117984 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e32da0_91ce_49f6_8f4e_928b9fee6fdf.slice/crio-4947c0f2c2173482754616547fc68e74dd98af8a3ab9544aa1373fa380b8cba0 WatchSource:0}: Error finding container 4947c0f2c2173482754616547fc68e74dd98af8a3ab9544aa1373fa380b8cba0: Status 404 returned error can't find the container with id 4947c0f2c2173482754616547fc68e74dd98af8a3ab9544aa1373fa380b8cba0 Jan 27 22:55:21 crc kubenswrapper[4803]: I0127 22:55:21.118709 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg2h2"] Jan 27 22:55:21 crc kubenswrapper[4803]: E0127 22:55:21.647974 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e32da0_91ce_49f6_8f4e_928b9fee6fdf.slice/crio-550a17de2d4815c0c6744bc7e795ad81af74e847c1580ca04331414022eeddce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e32da0_91ce_49f6_8f4e_928b9fee6fdf.slice/crio-conmon-550a17de2d4815c0c6744bc7e795ad81af74e847c1580ca04331414022eeddce.scope\": RecentStats: unable to find data in memory cache]" Jan 27 22:55:21 crc kubenswrapper[4803]: I0127 22:55:21.912687 4803 generic.go:334] "Generic (PLEG): container finished" podID="faeed815-483a-4f4c-afce-47a0a35822a4" containerID="ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3" exitCode=0 Jan 27 22:55:21 crc kubenswrapper[4803]: I0127 22:55:21.912753 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c9gf" event={"ID":"faeed815-483a-4f4c-afce-47a0a35822a4","Type":"ContainerDied","Data":"ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3"} Jan 27 22:55:21 crc kubenswrapper[4803]: I0127 22:55:21.914824 4803 generic.go:334] "Generic (PLEG): container finished" podID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerID="550a17de2d4815c0c6744bc7e795ad81af74e847c1580ca04331414022eeddce" exitCode=0 Jan 27 22:55:21 crc kubenswrapper[4803]: I0127 22:55:21.914889 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg2h2" event={"ID":"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf","Type":"ContainerDied","Data":"550a17de2d4815c0c6744bc7e795ad81af74e847c1580ca04331414022eeddce"} Jan 27 22:55:21 crc kubenswrapper[4803]: I0127 22:55:21.914915 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg2h2" event={"ID":"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf","Type":"ContainerStarted","Data":"4947c0f2c2173482754616547fc68e74dd98af8a3ab9544aa1373fa380b8cba0"} Jan 27 22:55:23 crc kubenswrapper[4803]: I0127 22:55:23.945018 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c9gf" 
event={"ID":"faeed815-483a-4f4c-afce-47a0a35822a4","Type":"ContainerStarted","Data":"609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6"} Jan 27 22:55:23 crc kubenswrapper[4803]: I0127 22:55:23.963321 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2c9gf" podStartSLOduration=3.518656109 podStartE2EDuration="6.963303416s" podCreationTimestamp="2026-01-27 22:55:17 +0000 UTC" firstStartedPulling="2026-01-27 22:55:18.881491042 +0000 UTC m=+4071.297512741" lastFinishedPulling="2026-01-27 22:55:22.326138329 +0000 UTC m=+4074.742160048" observedRunningTime="2026-01-27 22:55:23.962028741 +0000 UTC m=+4076.378050450" watchObservedRunningTime="2026-01-27 22:55:23.963303416 +0000 UTC m=+4076.379325115" Jan 27 22:55:26 crc kubenswrapper[4803]: I0127 22:55:26.981027 4803 generic.go:334] "Generic (PLEG): container finished" podID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerID="0e2cfcfc8e2543d36ce29cda6b4a5b300c4b915bfbc44cc8ec100b17e7a3bac8" exitCode=0 Jan 27 22:55:26 crc kubenswrapper[4803]: I0127 22:55:26.981092 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg2h2" event={"ID":"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf","Type":"ContainerDied","Data":"0e2cfcfc8e2543d36ce29cda6b4a5b300c4b915bfbc44cc8ec100b17e7a3bac8"} Jan 27 22:55:27 crc kubenswrapper[4803]: I0127 22:55:27.410980 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:27 crc kubenswrapper[4803]: I0127 22:55:27.411029 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:28 crc kubenswrapper[4803]: I0127 22:55:28.466661 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2c9gf" podUID="faeed815-483a-4f4c-afce-47a0a35822a4" containerName="registry-server" probeResult="failure" output=< Jan 27 22:55:28 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 22:55:28 crc kubenswrapper[4803]: > Jan 27 22:55:29 crc kubenswrapper[4803]: I0127 22:55:29.037142 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg2h2" event={"ID":"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf","Type":"ContainerStarted","Data":"d374519d7ca3d13e35d08fadcf3fbdfacddd14cf09ffacc72c3799812099cd9f"} Jan 27 22:55:29 crc kubenswrapper[4803]: I0127 22:55:29.061421 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hg2h2" podStartSLOduration=3.166505222 podStartE2EDuration="9.061401121s" podCreationTimestamp="2026-01-27 22:55:20 +0000 UTC" firstStartedPulling="2026-01-27 22:55:21.93302301 +0000 UTC m=+4074.349044709" lastFinishedPulling="2026-01-27 22:55:27.827918909 +0000 UTC m=+4080.243940608" observedRunningTime="2026-01-27 22:55:29.059584802 +0000 UTC m=+4081.475606521" watchObservedRunningTime="2026-01-27 22:55:29.061401121 +0000 UTC m=+4081.477422820" Jan 27 22:55:30 crc kubenswrapper[4803]: I0127 22:55:30.620757 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:30 crc kubenswrapper[4803]: I0127 22:55:30.621504 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:30 crc kubenswrapper[4803]: I0127 
22:55:30.671377 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:37 crc kubenswrapper[4803]: I0127 22:55:37.465180 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:37 crc kubenswrapper[4803]: I0127 22:55:37.526526 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:37 crc kubenswrapper[4803]: I0127 22:55:37.715682 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2c9gf"] Jan 27 22:55:39 crc kubenswrapper[4803]: I0127 22:55:39.137453 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2c9gf" podUID="faeed815-483a-4f4c-afce-47a0a35822a4" containerName="registry-server" containerID="cri-o://609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6" gracePeriod=2 Jan 27 22:55:39 crc kubenswrapper[4803]: I0127 22:55:39.666422 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:39 crc kubenswrapper[4803]: I0127 22:55:39.691750 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-catalog-content\") pod \"faeed815-483a-4f4c-afce-47a0a35822a4\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " Jan 27 22:55:39 crc kubenswrapper[4803]: I0127 22:55:39.691957 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfm59\" (UniqueName: \"kubernetes.io/projected/faeed815-483a-4f4c-afce-47a0a35822a4-kube-api-access-rfm59\") pod \"faeed815-483a-4f4c-afce-47a0a35822a4\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " Jan 27 22:55:39 crc kubenswrapper[4803]: I0127 22:55:39.692160 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-utilities\") pod \"faeed815-483a-4f4c-afce-47a0a35822a4\" (UID: \"faeed815-483a-4f4c-afce-47a0a35822a4\") " Jan 27 22:55:39 crc kubenswrapper[4803]: I0127 22:55:39.693682 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-utilities" (OuterVolumeSpecName: "utilities") pod "faeed815-483a-4f4c-afce-47a0a35822a4" (UID: "faeed815-483a-4f4c-afce-47a0a35822a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:55:39 crc kubenswrapper[4803]: I0127 22:55:39.699561 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faeed815-483a-4f4c-afce-47a0a35822a4-kube-api-access-rfm59" (OuterVolumeSpecName: "kube-api-access-rfm59") pod "faeed815-483a-4f4c-afce-47a0a35822a4" (UID: "faeed815-483a-4f4c-afce-47a0a35822a4"). InnerVolumeSpecName "kube-api-access-rfm59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:55:39 crc kubenswrapper[4803]: I0127 22:55:39.739996 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faeed815-483a-4f4c-afce-47a0a35822a4" (UID: "faeed815-483a-4f4c-afce-47a0a35822a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:55:39 crc kubenswrapper[4803]: I0127 22:55:39.796118 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:55:39 crc kubenswrapper[4803]: I0127 22:55:39.796170 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faeed815-483a-4f4c-afce-47a0a35822a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:55:39 crc kubenswrapper[4803]: I0127 22:55:39.796192 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfm59\" (UniqueName: \"kubernetes.io/projected/faeed815-483a-4f4c-afce-47a0a35822a4-kube-api-access-rfm59\") on node \"crc\" DevicePath \"\"" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.149420 4803 generic.go:334] "Generic (PLEG): container finished" podID="faeed815-483a-4f4c-afce-47a0a35822a4" containerID="609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6" exitCode=0 Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.149468 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c9gf" event={"ID":"faeed815-483a-4f4c-afce-47a0a35822a4","Type":"ContainerDied","Data":"609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6"} Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.149486 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2c9gf" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.149506 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c9gf" event={"ID":"faeed815-483a-4f4c-afce-47a0a35822a4","Type":"ContainerDied","Data":"5304b7b4a8676fb7b07f6e2a98327a818a4425a073ed77e80d63ca9a085061b9"} Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.149526 4803 scope.go:117] "RemoveContainer" containerID="609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.177945 4803 scope.go:117] "RemoveContainer" containerID="ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.187453 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2c9gf"] Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.198921 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2c9gf"] Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.219160 4803 scope.go:117] "RemoveContainer" containerID="d100ca0a8ef298c38f5a30a7a973ddc927edaa5fc10f9c96548f18e5c78e4c99" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.262329 4803 scope.go:117] "RemoveContainer" containerID="609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6" Jan 27 22:55:40 crc kubenswrapper[4803]: E0127 22:55:40.262815 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6\": container with ID starting with 609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6 not found: ID does not exist" containerID="609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.262897 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6"} err="failed to get container status \"609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6\": rpc error: code = NotFound desc = could not find container \"609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6\": container with ID starting with 609324d8f3d6f41c87bef0c98148364cf57907cc4e4eff1b3ca4bd71350e40b6 not found: ID does not exist" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.262942 4803 scope.go:117] "RemoveContainer" containerID="ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3" Jan 27 22:55:40 crc kubenswrapper[4803]: E0127 22:55:40.263285 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3\": container with ID starting with ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3 not found: ID does not exist" containerID="ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.263314 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3"} err="failed to get container status \"ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3\": rpc error: code = NotFound desc = could not find 
container \"ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3\": container with ID starting with ef7dee918f9bdb481a450350db03626991246bc6a8c51b0261c887ec172963f3 not found: ID does not exist" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.263332 4803 scope.go:117] "RemoveContainer" containerID="d100ca0a8ef298c38f5a30a7a973ddc927edaa5fc10f9c96548f18e5c78e4c99" Jan 27 22:55:40 crc kubenswrapper[4803]: E0127 22:55:40.263712 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d100ca0a8ef298c38f5a30a7a973ddc927edaa5fc10f9c96548f18e5c78e4c99\": container with ID starting with d100ca0a8ef298c38f5a30a7a973ddc927edaa5fc10f9c96548f18e5c78e4c99 not found: ID does not exist" containerID="d100ca0a8ef298c38f5a30a7a973ddc927edaa5fc10f9c96548f18e5c78e4c99" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.263736 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d100ca0a8ef298c38f5a30a7a973ddc927edaa5fc10f9c96548f18e5c78e4c99"} err="failed to get container status \"d100ca0a8ef298c38f5a30a7a973ddc927edaa5fc10f9c96548f18e5c78e4c99\": rpc error: code = NotFound desc = could not find container \"d100ca0a8ef298c38f5a30a7a973ddc927edaa5fc10f9c96548f18e5c78e4c99\": container with ID starting with d100ca0a8ef298c38f5a30a7a973ddc927edaa5fc10f9c96548f18e5c78e4c99 not found: ID does not exist" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.320637 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faeed815-483a-4f4c-afce-47a0a35822a4" path="/var/lib/kubelet/pods/faeed815-483a-4f4c-afce-47a0a35822a4/volumes" Jan 27 22:55:40 crc kubenswrapper[4803]: I0127 22:55:40.669145 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 22:55:42 crc kubenswrapper[4803]: I0127 22:55:42.566543 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg2h2"] Jan 27 22:55:42 crc kubenswrapper[4803]: I0127 22:55:42.914285 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4445"] Jan 27 22:55:42 crc kubenswrapper[4803]: I0127 22:55:42.915176 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j4445" podUID="b99815d1-e732-429a-afb0-7e2328eb4a80" containerName="registry-server" containerID="cri-o://aefcc1a457f81117e6f31fdcde6645a472bccf963166b93fe6882798be05e1ea" gracePeriod=2 Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.197956 4803 generic.go:334] "Generic (PLEG): container finished" podID="b99815d1-e732-429a-afb0-7e2328eb4a80" containerID="aefcc1a457f81117e6f31fdcde6645a472bccf963166b93fe6882798be05e1ea" exitCode=0 Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.198003 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4445" event={"ID":"b99815d1-e732-429a-afb0-7e2328eb4a80","Type":"ContainerDied","Data":"aefcc1a457f81117e6f31fdcde6645a472bccf963166b93fe6882798be05e1ea"} Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.454128 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.496408 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-catalog-content\") pod \"b99815d1-e732-429a-afb0-7e2328eb4a80\" (UID: \"b99815d1-e732-429a-afb0-7e2328eb4a80\") " Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.496513 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w7mp\" (UniqueName: \"kubernetes.io/projected/b99815d1-e732-429a-afb0-7e2328eb4a80-kube-api-access-9w7mp\") pod \"b99815d1-e732-429a-afb0-7e2328eb4a80\" (UID: \"b99815d1-e732-429a-afb0-7e2328eb4a80\") " Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.496701 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-utilities\") pod \"b99815d1-e732-429a-afb0-7e2328eb4a80\" (UID: \"b99815d1-e732-429a-afb0-7e2328eb4a80\") " Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.498920 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-utilities" (OuterVolumeSpecName: "utilities") pod "b99815d1-e732-429a-afb0-7e2328eb4a80" (UID: "b99815d1-e732-429a-afb0-7e2328eb4a80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.508503 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99815d1-e732-429a-afb0-7e2328eb4a80-kube-api-access-9w7mp" (OuterVolumeSpecName: "kube-api-access-9w7mp") pod "b99815d1-e732-429a-afb0-7e2328eb4a80" (UID: "b99815d1-e732-429a-afb0-7e2328eb4a80"). InnerVolumeSpecName "kube-api-access-9w7mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.534645 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b99815d1-e732-429a-afb0-7e2328eb4a80" (UID: "b99815d1-e732-429a-afb0-7e2328eb4a80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.600326 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.600372 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b99815d1-e732-429a-afb0-7e2328eb4a80-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:55:43 crc kubenswrapper[4803]: I0127 22:55:43.600388 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w7mp\" (UniqueName: \"kubernetes.io/projected/b99815d1-e732-429a-afb0-7e2328eb4a80-kube-api-access-9w7mp\") on node \"crc\" DevicePath \"\"" Jan 27 22:55:44 crc kubenswrapper[4803]: I0127 22:55:44.210013 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4445" event={"ID":"b99815d1-e732-429a-afb0-7e2328eb4a80","Type":"ContainerDied","Data":"37d22dfdc90a75a211ecda61a0f832de1edbbaf642d329c650bc69d9e4227cc9"} Jan 27 22:55:44 crc kubenswrapper[4803]: I0127 22:55:44.210334 4803 scope.go:117] "RemoveContainer" containerID="aefcc1a457f81117e6f31fdcde6645a472bccf963166b93fe6882798be05e1ea" Jan 27 22:55:44 crc kubenswrapper[4803]: I0127 22:55:44.210095 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4445" Jan 27 22:55:44 crc kubenswrapper[4803]: I0127 22:55:44.248356 4803 scope.go:117] "RemoveContainer" containerID="45513cf634b4312942936a5552bcd8db7eb718723dd98db65ba798692f19c45b" Jan 27 22:55:44 crc kubenswrapper[4803]: I0127 22:55:44.248482 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4445"] Jan 27 22:55:44 crc kubenswrapper[4803]: I0127 22:55:44.259321 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4445"] Jan 27 22:55:44 crc kubenswrapper[4803]: I0127 22:55:44.277482 4803 scope.go:117] "RemoveContainer" containerID="6db437d47fe8f0e6877748b0e54194c3a4e430dec6e10e01ae703c743350f23f" Jan 27 22:55:44 crc kubenswrapper[4803]: I0127 22:55:44.336931 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b99815d1-e732-429a-afb0-7e2328eb4a80" path="/var/lib/kubelet/pods/b99815d1-e732-429a-afb0-7e2328eb4a80/volumes" Jan 27 22:55:46 crc kubenswrapper[4803]: I0127 22:55:46.343226 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:55:46 crc kubenswrapper[4803]: I0127 22:55:46.343570 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:55:46 crc kubenswrapper[4803]: I0127 22:55:46.343612 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 22:55:46 crc kubenswrapper[4803]: I0127 22:55:46.344464 4803 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee4ce493c1e5d5c7ba473144f65c9a2ec956a1a31df6273a4a67a184d07d2e5a"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 22:55:46 crc kubenswrapper[4803]: I0127 22:55:46.344527 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://ee4ce493c1e5d5c7ba473144f65c9a2ec956a1a31df6273a4a67a184d07d2e5a" gracePeriod=600 Jan 27 22:55:47 crc kubenswrapper[4803]: I0127 22:55:47.242564 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="ee4ce493c1e5d5c7ba473144f65c9a2ec956a1a31df6273a4a67a184d07d2e5a" exitCode=0 Jan 27 22:55:47 crc kubenswrapper[4803]: I0127 22:55:47.242637 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"ee4ce493c1e5d5c7ba473144f65c9a2ec956a1a31df6273a4a67a184d07d2e5a"} Jan 27 22:55:47 crc kubenswrapper[4803]: I0127 22:55:47.243102 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"} Jan 27 22:55:47 crc kubenswrapper[4803]: I0127 22:55:47.243124 4803 scope.go:117] "RemoveContainer" containerID="c7d07e5a78ca8dc434afa754820bfa8cac06287c13070177291759c5e1dbd7b0" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.621875 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cwt95"] Jan 27 22:56:31 crc kubenswrapper[4803]: E0127 22:56:31.625008 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faeed815-483a-4f4c-afce-47a0a35822a4" containerName="registry-server" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.625142 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="faeed815-483a-4f4c-afce-47a0a35822a4" containerName="registry-server" Jan 27 22:56:31 crc kubenswrapper[4803]: E0127 22:56:31.625238 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faeed815-483a-4f4c-afce-47a0a35822a4" containerName="extract-content" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.625317 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="faeed815-483a-4f4c-afce-47a0a35822a4" containerName="extract-content" Jan 27 22:56:31 crc kubenswrapper[4803]: E0127 22:56:31.625610 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99815d1-e732-429a-afb0-7e2328eb4a80" containerName="registry-server" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.625715 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99815d1-e732-429a-afb0-7e2328eb4a80" containerName="registry-server" Jan 27 22:56:31 crc kubenswrapper[4803]: E0127 22:56:31.625827 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99815d1-e732-429a-afb0-7e2328eb4a80" containerName="extract-utilities" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.625934 4803 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b99815d1-e732-429a-afb0-7e2328eb4a80" containerName="extract-utilities" Jan 27 22:56:31 crc kubenswrapper[4803]: E0127 22:56:31.626026 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99815d1-e732-429a-afb0-7e2328eb4a80" containerName="extract-content" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.626104 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99815d1-e732-429a-afb0-7e2328eb4a80" containerName="extract-content" Jan 27 22:56:31 crc kubenswrapper[4803]: E0127 22:56:31.626268 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faeed815-483a-4f4c-afce-47a0a35822a4" containerName="extract-utilities" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.626355 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="faeed815-483a-4f4c-afce-47a0a35822a4" containerName="extract-utilities" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.626798 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="faeed815-483a-4f4c-afce-47a0a35822a4" containerName="registry-server" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.626945 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99815d1-e732-429a-afb0-7e2328eb4a80" containerName="registry-server" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.629201 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.637343 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwt95"] Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.739342 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1088c904-bd11-410d-963b-91425f9e2ee1-utilities\") pod \"redhat-operators-cwt95\" (UID: \"1088c904-bd11-410d-963b-91425f9e2ee1\") " pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.739407 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1088c904-bd11-410d-963b-91425f9e2ee1-catalog-content\") pod \"redhat-operators-cwt95\" (UID: \"1088c904-bd11-410d-963b-91425f9e2ee1\") " pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.739595 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gmwb\" (UniqueName: \"kubernetes.io/projected/1088c904-bd11-410d-963b-91425f9e2ee1-kube-api-access-9gmwb\") pod \"redhat-operators-cwt95\" (UID: \"1088c904-bd11-410d-963b-91425f9e2ee1\") " pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.841578 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1088c904-bd11-410d-963b-91425f9e2ee1-utilities\") pod \"redhat-operators-cwt95\" (UID: \"1088c904-bd11-410d-963b-91425f9e2ee1\") " pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.841645 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1088c904-bd11-410d-963b-91425f9e2ee1-catalog-content\") pod \"redhat-operators-cwt95\" (UID: 
\"1088c904-bd11-410d-963b-91425f9e2ee1\") " pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.841750 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gmwb\" (UniqueName: \"kubernetes.io/projected/1088c904-bd11-410d-963b-91425f9e2ee1-kube-api-access-9gmwb\") pod \"redhat-operators-cwt95\" (UID: \"1088c904-bd11-410d-963b-91425f9e2ee1\") " pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.842191 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1088c904-bd11-410d-963b-91425f9e2ee1-utilities\") pod \"redhat-operators-cwt95\" (UID: \"1088c904-bd11-410d-963b-91425f9e2ee1\") " pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:31 crc kubenswrapper[4803]: I0127 22:56:31.842253 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1088c904-bd11-410d-963b-91425f9e2ee1-catalog-content\") pod \"redhat-operators-cwt95\" (UID: \"1088c904-bd11-410d-963b-91425f9e2ee1\") " pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:32 crc kubenswrapper[4803]: I0127 22:56:32.186249 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gmwb\" (UniqueName: \"kubernetes.io/projected/1088c904-bd11-410d-963b-91425f9e2ee1-kube-api-access-9gmwb\") pod \"redhat-operators-cwt95\" (UID: \"1088c904-bd11-410d-963b-91425f9e2ee1\") " pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:32 crc kubenswrapper[4803]: I0127 22:56:32.276500 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:32 crc kubenswrapper[4803]: I0127 22:56:32.800887 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwt95"] Jan 27 22:56:33 crc kubenswrapper[4803]: I0127 22:56:33.489801 4803 generic.go:334] "Generic (PLEG): container finished" podID="1088c904-bd11-410d-963b-91425f9e2ee1" containerID="c4d5851595e32a720b636420caeed273ec4eeef6af97fb809416ee42ffe3ea65" exitCode=0 Jan 27 22:56:33 crc kubenswrapper[4803]: I0127 22:56:33.489964 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwt95" event={"ID":"1088c904-bd11-410d-963b-91425f9e2ee1","Type":"ContainerDied","Data":"c4d5851595e32a720b636420caeed273ec4eeef6af97fb809416ee42ffe3ea65"} Jan 27 22:56:33 crc kubenswrapper[4803]: I0127 22:56:33.490084 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwt95" event={"ID":"1088c904-bd11-410d-963b-91425f9e2ee1","Type":"ContainerStarted","Data":"2218b9d2b04b32174142e4c72539f18e896e54e23a1636a1993082ccf6588d73"} Jan 27 22:56:43 crc kubenswrapper[4803]: I0127 22:56:43.641703 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwt95" event={"ID":"1088c904-bd11-410d-963b-91425f9e2ee1","Type":"ContainerStarted","Data":"aaf24fecdbe8097c13aa9bd5bc395e15dff0e07bec4c4e901cdbab23e5e976d3"} Jan 27 22:56:45 crc kubenswrapper[4803]: I0127 22:56:45.671286 4803 generic.go:334] "Generic (PLEG): container finished" podID="1088c904-bd11-410d-963b-91425f9e2ee1" containerID="aaf24fecdbe8097c13aa9bd5bc395e15dff0e07bec4c4e901cdbab23e5e976d3" exitCode=0 Jan 27 22:56:45 crc kubenswrapper[4803]: I0127 22:56:45.671382 4803 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwt95" event={"ID":"1088c904-bd11-410d-963b-91425f9e2ee1","Type":"ContainerDied","Data":"aaf24fecdbe8097c13aa9bd5bc395e15dff0e07bec4c4e901cdbab23e5e976d3"} Jan 27 22:56:46 crc kubenswrapper[4803]: I0127 22:56:46.683052 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwt95" event={"ID":"1088c904-bd11-410d-963b-91425f9e2ee1","Type":"ContainerStarted","Data":"a260052a8bfa059e6b602e68e3e887eb032734c2c3b3e5211167b4fa88cd54e3"} Jan 27 22:56:46 crc kubenswrapper[4803]: I0127 22:56:46.708122 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cwt95" podStartSLOduration=3.115477191 podStartE2EDuration="15.708105048s" podCreationTimestamp="2026-01-27 22:56:31 +0000 UTC" firstStartedPulling="2026-01-27 22:56:33.492742174 +0000 UTC m=+4145.908763873" lastFinishedPulling="2026-01-27 22:56:46.085370031 +0000 UTC m=+4158.501391730" observedRunningTime="2026-01-27 22:56:46.702487656 +0000 UTC m=+4159.118509355" watchObservedRunningTime="2026-01-27 22:56:46.708105048 +0000 UTC m=+4159.124126747" Jan 27 22:56:52 crc kubenswrapper[4803]: I0127 22:56:52.277137 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:52 crc kubenswrapper[4803]: I0127 22:56:52.283734 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:52 crc kubenswrapper[4803]: I0127 22:56:52.352393 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:52 crc kubenswrapper[4803]: I0127 22:56:52.812892 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cwt95" Jan 27 22:56:52 crc kubenswrapper[4803]: I0127 22:56:52.878348 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwt95"] Jan 27 22:56:52 crc kubenswrapper[4803]: I0127 22:56:52.931153 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knvxh"] Jan 27 22:56:52 crc kubenswrapper[4803]: I0127 22:56:52.931533 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-knvxh" podUID="0b16bfe1-a641-480e-aef3-9217bd7f8842" containerName="registry-server" containerID="cri-o://aa88b3ba9fb2f6029b80b664897c036df8ac48b6e29ecdaa2db6e5b76c839f90" gracePeriod=2 Jan 27 22:56:53 crc kubenswrapper[4803]: I0127 22:56:53.755809 4803 generic.go:334] "Generic (PLEG): container finished" podID="0b16bfe1-a641-480e-aef3-9217bd7f8842" containerID="aa88b3ba9fb2f6029b80b664897c036df8ac48b6e29ecdaa2db6e5b76c839f90" exitCode=0 Jan 27 22:56:53 crc kubenswrapper[4803]: I0127 22:56:53.756012 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvxh" event={"ID":"0b16bfe1-a641-480e-aef3-9217bd7f8842","Type":"ContainerDied","Data":"aa88b3ba9fb2f6029b80b664897c036df8ac48b6e29ecdaa2db6e5b76c839f90"} Jan 27 22:56:53 crc kubenswrapper[4803]: I0127 22:56:53.756404 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvxh" event={"ID":"0b16bfe1-a641-480e-aef3-9217bd7f8842","Type":"ContainerDied","Data":"50bd4c303fa4f87b888095e7dcce7d6bedf8dac02dfb3224ac51f61ce19a0d04"} 
Jan 27 22:56:53 crc kubenswrapper[4803]: I0127 22:56:53.756421 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50bd4c303fa4f87b888095e7dcce7d6bedf8dac02dfb3224ac51f61ce19a0d04"
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.362543 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knvxh"
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.520659 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-catalog-content\") pod \"0b16bfe1-a641-480e-aef3-9217bd7f8842\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") "
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.520737 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hznsm\" (UniqueName: \"kubernetes.io/projected/0b16bfe1-a641-480e-aef3-9217bd7f8842-kube-api-access-hznsm\") pod \"0b16bfe1-a641-480e-aef3-9217bd7f8842\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") "
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.521058 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-utilities\") pod \"0b16bfe1-a641-480e-aef3-9217bd7f8842\" (UID: \"0b16bfe1-a641-480e-aef3-9217bd7f8842\") "
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.521710 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-utilities" (OuterVolumeSpecName: "utilities") pod "0b16bfe1-a641-480e-aef3-9217bd7f8842" (UID: "0b16bfe1-a641-480e-aef3-9217bd7f8842"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.522071 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.529806 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b16bfe1-a641-480e-aef3-9217bd7f8842-kube-api-access-hznsm" (OuterVolumeSpecName: "kube-api-access-hznsm") pod "0b16bfe1-a641-480e-aef3-9217bd7f8842" (UID: "0b16bfe1-a641-480e-aef3-9217bd7f8842"). InnerVolumeSpecName "kube-api-access-hznsm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.624733 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hznsm\" (UniqueName: \"kubernetes.io/projected/0b16bfe1-a641-480e-aef3-9217bd7f8842-kube-api-access-hznsm\") on node \"crc\" DevicePath \"\""
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.631073 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b16bfe1-a641-480e-aef3-9217bd7f8842" (UID: "0b16bfe1-a641-480e-aef3-9217bd7f8842"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.726990 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b16bfe1-a641-480e-aef3-9217bd7f8842-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.789234 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knvxh"
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.840367 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knvxh"]
Jan 27 22:56:54 crc kubenswrapper[4803]: I0127 22:56:54.853084 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-knvxh"]
Jan 27 22:56:56 crc kubenswrapper[4803]: I0127 22:56:56.318582 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b16bfe1-a641-480e-aef3-9217bd7f8842" path="/var/lib/kubelet/pods/0b16bfe1-a641-480e-aef3-9217bd7f8842/volumes"
Jan 27 22:57:35 crc kubenswrapper[4803]: I0127 22:57:35.612769 4803 scope.go:117] "RemoveContainer" containerID="aa88b3ba9fb2f6029b80b664897c036df8ac48b6e29ecdaa2db6e5b76c839f90"
Jan 27 22:57:35 crc kubenswrapper[4803]: I0127 22:57:35.638280 4803 scope.go:117] "RemoveContainer" containerID="6bd1ebdc6582812beb55a292517e8ebb9cc67e9dfd34127182fa5425b816a6de"
Jan 27 22:57:35 crc kubenswrapper[4803]: I0127 22:57:35.668251 4803 scope.go:117] "RemoveContainer" containerID="2cbe96141128726d54bbac41be7ebec0f46dc3c57f3767f62c00be824457febb"
Jan 27 22:57:46 crc kubenswrapper[4803]: I0127 22:57:46.343431 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 22:57:46 crc kubenswrapper[4803]: I0127 22:57:46.344065 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 22:58:16 crc kubenswrapper[4803]: I0127 22:58:16.343523 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 22:58:16 crc kubenswrapper[4803]: I0127 22:58:16.344199 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 22:58:46 crc kubenswrapper[4803]: I0127 22:58:46.343503 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 22:58:46 crc kubenswrapper[4803]: I0127 22:58:46.344059 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 22:58:46 crc kubenswrapper[4803]: I0127 22:58:46.344108 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp"
Jan 27 22:58:46 crc kubenswrapper[4803]: I0127 22:58:46.344974 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 22:58:46 crc kubenswrapper[4803]: I0127 22:58:46.345032 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" gracePeriod=600
Jan 27 22:58:46 crc kubenswrapper[4803]: E0127 22:58:46.474590 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 22:58:47 crc kubenswrapper[4803]: I0127 22:58:47.050287 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" exitCode=0
Jan 27 22:58:47 crc kubenswrapper[4803]: I0127 22:58:47.050403 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"}
Jan 27 22:58:47 crc kubenswrapper[4803]: I0127 22:58:47.050572 4803 scope.go:117] "RemoveContainer" containerID="ee4ce493c1e5d5c7ba473144f65c9a2ec956a1a31df6273a4a67a184d07d2e5a"
Jan 27 22:58:47 crc kubenswrapper[4803]: I0127 22:58:47.052448 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"
Jan 27 22:58:47 crc kubenswrapper[4803]: E0127 22:58:47.053244 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 22:58:59 crc kubenswrapper[4803]: I0127 22:58:59.320113 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"
Jan 27 22:58:59 crc kubenswrapper[4803]: E0127 22:58:59.321770 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 22:59:14 crc kubenswrapper[4803]: I0127 22:59:14.307623 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"
Jan 27 22:59:14 crc kubenswrapper[4803]: E0127 22:59:14.310611 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 22:59:27 crc kubenswrapper[4803]: I0127 22:59:27.308023 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"
Jan 27 22:59:27 crc kubenswrapper[4803]: E0127 22:59:27.308895 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 22:59:41 crc kubenswrapper[4803]: I0127 22:59:41.308058 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"
Jan 27 22:59:41 crc kubenswrapper[4803]: E0127 22:59:41.308928 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 22:59:53 crc kubenswrapper[4803]: I0127 22:59:53.307253 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"
Jan 27 22:59:53 crc kubenswrapper[4803]: E0127 22:59:53.309347 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
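The repeating RemoveContainer / "Error syncing pod" pairs above are the kubelet's restart backoff at its ceiling: the per-container restart delay starts at 10s, doubles on each failed restart, and is capped at five minutes, which is the "back-off 5m0s" in the error text. A minimal sketch of that doubling-with-cap schedule (illustrative; the kubelet implements this with its flowcontrol backoff helper, not this code):

    package main

    import (
    	"fmt"
    	"time"
    )

    // Illustrates the kubelet's container-restart backoff schedule:
    // an initial 10s delay that doubles per failed restart, capped at 5m.
    func main() {
    	const (
    		initial = 10 * time.Second
    		ceiling = 5 * time.Minute // the "back-off 5m0s" seen in the log
    	)
    	delay := initial
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("restart %d: wait %v\n", attempt, delay)
    		if delay *= 2; delay > ceiling {
    			delay = ceiling
    		}
    	}
    	// prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s for every later attempt
    }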
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.206331 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"]
Jan 27 23:00:00 crc kubenswrapper[4803]: E0127 23:00:00.207439 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b16bfe1-a641-480e-aef3-9217bd7f8842" containerName="registry-server"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.207456 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b16bfe1-a641-480e-aef3-9217bd7f8842" containerName="registry-server"
Jan 27 23:00:00 crc kubenswrapper[4803]: E0127 23:00:00.207506 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b16bfe1-a641-480e-aef3-9217bd7f8842" containerName="extract-utilities"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.207512 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b16bfe1-a641-480e-aef3-9217bd7f8842" containerName="extract-utilities"
Jan 27 23:00:00 crc kubenswrapper[4803]: E0127 23:00:00.207529 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b16bfe1-a641-480e-aef3-9217bd7f8842" containerName="extract-content"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.207536 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b16bfe1-a641-480e-aef3-9217bd7f8842" containerName="extract-content"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.207762 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b16bfe1-a641-480e-aef3-9217bd7f8842" containerName="registry-server"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.208738 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.211732 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.211761 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.219860 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"]
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.320338 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/214172ed-b099-4e45-b3cd-e1b86d3309ae-secret-volume\") pod \"collect-profiles-29492580-n8wmd\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.320608 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/214172ed-b099-4e45-b3cd-e1b86d3309ae-config-volume\") pod \"collect-profiles-29492580-n8wmd\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.320733 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j5vd\" (UniqueName: \"kubernetes.io/projected/214172ed-b099-4e45-b3cd-e1b86d3309ae-kube-api-access-2j5vd\") pod \"collect-profiles-29492580-n8wmd\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.423260 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j5vd\" (UniqueName: \"kubernetes.io/projected/214172ed-b099-4e45-b3cd-e1b86d3309ae-kube-api-access-2j5vd\") pod \"collect-profiles-29492580-n8wmd\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.423335 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/214172ed-b099-4e45-b3cd-e1b86d3309ae-secret-volume\") pod \"collect-profiles-29492580-n8wmd\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.423518 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/214172ed-b099-4e45-b3cd-e1b86d3309ae-config-volume\") pod \"collect-profiles-29492580-n8wmd\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.424646 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/214172ed-b099-4e45-b3cd-e1b86d3309ae-config-volume\") pod \"collect-profiles-29492580-n8wmd\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.429509 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/214172ed-b099-4e45-b3cd-e1b86d3309ae-secret-volume\") pod \"collect-profiles-29492580-n8wmd\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.447604 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j5vd\" (UniqueName: \"kubernetes.io/projected/214172ed-b099-4e45-b3cd-e1b86d3309ae-kube-api-access-2j5vd\") pod \"collect-profiles-29492580-n8wmd\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.542560 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:00 crc kubenswrapper[4803]: I0127 23:00:00.993537 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"]
Jan 27 23:00:01 crc kubenswrapper[4803]: I0127 23:00:01.834822 4803 generic.go:334] "Generic (PLEG): container finished" podID="214172ed-b099-4e45-b3cd-e1b86d3309ae" containerID="f5d8ceda5910d99a61481ca832cbb7dc979f57b7f431b208eb8975b4f25b94d3" exitCode=0
Jan 27 23:00:01 crc kubenswrapper[4803]: I0127 23:00:01.835002 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd" event={"ID":"214172ed-b099-4e45-b3cd-e1b86d3309ae","Type":"ContainerDied","Data":"f5d8ceda5910d99a61481ca832cbb7dc979f57b7f431b208eb8975b4f25b94d3"}
Jan 27 23:00:01 crc kubenswrapper[4803]: I0127 23:00:01.835092 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd" event={"ID":"214172ed-b099-4e45-b3cd-e1b86d3309ae","Type":"ContainerStarted","Data":"97656e5002c8056be9144cad00c2356db73f52bad8395c4fc97ce63d8079257e"}
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.248426 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.411420 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/214172ed-b099-4e45-b3cd-e1b86d3309ae-config-volume\") pod \"214172ed-b099-4e45-b3cd-e1b86d3309ae\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") "
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.411872 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j5vd\" (UniqueName: \"kubernetes.io/projected/214172ed-b099-4e45-b3cd-e1b86d3309ae-kube-api-access-2j5vd\") pod \"214172ed-b099-4e45-b3cd-e1b86d3309ae\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") "
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.412115 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/214172ed-b099-4e45-b3cd-e1b86d3309ae-secret-volume\") pod \"214172ed-b099-4e45-b3cd-e1b86d3309ae\" (UID: \"214172ed-b099-4e45-b3cd-e1b86d3309ae\") "
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.412564 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214172ed-b099-4e45-b3cd-e1b86d3309ae-config-volume" (OuterVolumeSpecName: "config-volume") pod "214172ed-b099-4e45-b3cd-e1b86d3309ae" (UID: "214172ed-b099-4e45-b3cd-e1b86d3309ae"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.413522 4803 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/214172ed-b099-4e45-b3cd-e1b86d3309ae-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.418063 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214172ed-b099-4e45-b3cd-e1b86d3309ae-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "214172ed-b099-4e45-b3cd-e1b86d3309ae" (UID: "214172ed-b099-4e45-b3cd-e1b86d3309ae"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.418090 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214172ed-b099-4e45-b3cd-e1b86d3309ae-kube-api-access-2j5vd" (OuterVolumeSpecName: "kube-api-access-2j5vd") pod "214172ed-b099-4e45-b3cd-e1b86d3309ae" (UID: "214172ed-b099-4e45-b3cd-e1b86d3309ae"). InnerVolumeSpecName "kube-api-access-2j5vd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.516034 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j5vd\" (UniqueName: \"kubernetes.io/projected/214172ed-b099-4e45-b3cd-e1b86d3309ae-kube-api-access-2j5vd\") on node \"crc\" DevicePath \"\""
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.516073 4803 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/214172ed-b099-4e45-b3cd-e1b86d3309ae-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.859093 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd" event={"ID":"214172ed-b099-4e45-b3cd-e1b86d3309ae","Type":"ContainerDied","Data":"97656e5002c8056be9144cad00c2356db73f52bad8395c4fc97ce63d8079257e"}
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.859146 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97656e5002c8056be9144cad00c2356db73f52bad8395c4fc97ce63d8079257e"
Jan 27 23:00:03 crc kubenswrapper[4803]: I0127 23:00:03.859207 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492580-n8wmd"
Jan 27 23:00:04 crc kubenswrapper[4803]: I0127 23:00:04.332200 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947"]
Jan 27 23:00:04 crc kubenswrapper[4803]: I0127 23:00:04.341947 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492535-h7947"]
Jan 27 23:00:05 crc kubenswrapper[4803]: I0127 23:00:05.307615 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"
Jan 27 23:00:05 crc kubenswrapper[4803]: E0127 23:00:05.308290 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 23:00:06 crc kubenswrapper[4803]: I0127 23:00:06.325594 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c23005bb-85d7-416b-8668-522a0d5785cb" path="/var/lib/kubelet/pods/c23005bb-85d7-416b-8668-522a0d5785cb/volumes"
Jan 27 23:00:20 crc kubenswrapper[4803]: I0127 23:00:20.308933 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"
Jan 27 23:00:20 crc kubenswrapper[4803]: E0127 23:00:20.310645 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 23:00:35 crc kubenswrapper[4803]: I0127 23:00:35.306825 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"
Jan 27 23:00:35 crc kubenswrapper[4803]: E0127 23:00:35.307795 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 23:00:35 crc kubenswrapper[4803]: I0127 23:00:35.823479 4803 scope.go:117] "RemoveContainer" containerID="f9514eaf305f5ff5c180fc8954bfce02502c59d7d0e93caf2f05cb079ecd5efb"
Jan 27 23:00:49 crc kubenswrapper[4803]: I0127 23:00:49.309091 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771"
Jan 27 23:00:49 crc kubenswrapper[4803]: E0127 23:00:49.311037 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.147239 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492581-qjd4h"] Jan 27 23:01:00 crc kubenswrapper[4803]: E0127 23:01:00.148316 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214172ed-b099-4e45-b3cd-e1b86d3309ae" containerName="collect-profiles" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.148331 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="214172ed-b099-4e45-b3cd-e1b86d3309ae" containerName="collect-profiles" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.148592 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="214172ed-b099-4e45-b3cd-e1b86d3309ae" containerName="collect-profiles" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.149617 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.172962 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492581-qjd4h"] Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.276647 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-config-data\") pod \"keystone-cron-29492581-qjd4h\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.277027 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-fernet-keys\") pod \"keystone-cron-29492581-qjd4h\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.277095 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh4nc\" (UniqueName: \"kubernetes.io/projected/b3b21dcd-161c-4e90-adc7-292a7ff99d86-kube-api-access-vh4nc\") pod \"keystone-cron-29492581-qjd4h\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.277288 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-combined-ca-bundle\") pod \"keystone-cron-29492581-qjd4h\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.380175 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-config-data\") pod \"keystone-cron-29492581-qjd4h\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.380231 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-fernet-keys\") pod \"keystone-cron-29492581-qjd4h\" (UID: 
\"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.380277 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh4nc\" (UniqueName: \"kubernetes.io/projected/b3b21dcd-161c-4e90-adc7-292a7ff99d86-kube-api-access-vh4nc\") pod \"keystone-cron-29492581-qjd4h\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.381206 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-combined-ca-bundle\") pod \"keystone-cron-29492581-qjd4h\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.386692 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-fernet-keys\") pod \"keystone-cron-29492581-qjd4h\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.386987 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-config-data\") pod \"keystone-cron-29492581-qjd4h\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.388460 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-combined-ca-bundle\") pod \"keystone-cron-29492581-qjd4h\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.399044 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh4nc\" (UniqueName: \"kubernetes.io/projected/b3b21dcd-161c-4e90-adc7-292a7ff99d86-kube-api-access-vh4nc\") pod \"keystone-cron-29492581-qjd4h\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.472328 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:00 crc kubenswrapper[4803]: I0127 23:01:00.944386 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492581-qjd4h"] Jan 27 23:01:01 crc kubenswrapper[4803]: I0127 23:01:01.481773 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492581-qjd4h" event={"ID":"b3b21dcd-161c-4e90-adc7-292a7ff99d86","Type":"ContainerStarted","Data":"cd5400de01cc45421f60af839591667267827eb77dd3f9c8d6b223cb9249b82b"} Jan 27 23:01:01 crc kubenswrapper[4803]: I0127 23:01:01.482166 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492581-qjd4h" event={"ID":"b3b21dcd-161c-4e90-adc7-292a7ff99d86","Type":"ContainerStarted","Data":"f315839ef8ef46dc86c09cb7326b193684dcc4de885742217d6a0359bde94cbe"} Jan 27 23:01:01 crc kubenswrapper[4803]: I0127 23:01:01.518367 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492581-qjd4h" podStartSLOduration=1.518343488 podStartE2EDuration="1.518343488s" podCreationTimestamp="2026-01-27 23:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 23:01:01.500276379 +0000 UTC m=+4413.916298088" watchObservedRunningTime="2026-01-27 23:01:01.518343488 +0000 UTC m=+4413.934365197" Jan 27 23:01:02 crc kubenswrapper[4803]: I0127 23:01:02.307357 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:01:02 crc kubenswrapper[4803]: E0127 23:01:02.308127 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:01:05 crc kubenswrapper[4803]: I0127 23:01:05.520718 4803 generic.go:334] "Generic (PLEG): container finished" podID="b3b21dcd-161c-4e90-adc7-292a7ff99d86" containerID="cd5400de01cc45421f60af839591667267827eb77dd3f9c8d6b223cb9249b82b" exitCode=0 Jan 27 23:01:05 crc kubenswrapper[4803]: I0127 23:01:05.520804 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492581-qjd4h" event={"ID":"b3b21dcd-161c-4e90-adc7-292a7ff99d86","Type":"ContainerDied","Data":"cd5400de01cc45421f60af839591667267827eb77dd3f9c8d6b223cb9249b82b"} Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.026595 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.166021 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-fernet-keys\") pod \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.166348 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh4nc\" (UniqueName: \"kubernetes.io/projected/b3b21dcd-161c-4e90-adc7-292a7ff99d86-kube-api-access-vh4nc\") pod \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.166519 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-combined-ca-bundle\") pod \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.166625 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-config-data\") pod \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\" (UID: \"b3b21dcd-161c-4e90-adc7-292a7ff99d86\") " Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.172466 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b21dcd-161c-4e90-adc7-292a7ff99d86-kube-api-access-vh4nc" (OuterVolumeSpecName: "kube-api-access-vh4nc") pod "b3b21dcd-161c-4e90-adc7-292a7ff99d86" (UID: "b3b21dcd-161c-4e90-adc7-292a7ff99d86"). InnerVolumeSpecName "kube-api-access-vh4nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.172567 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b3b21dcd-161c-4e90-adc7-292a7ff99d86" (UID: "b3b21dcd-161c-4e90-adc7-292a7ff99d86"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.269155 4803 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.269194 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh4nc\" (UniqueName: \"kubernetes.io/projected/b3b21dcd-161c-4e90-adc7-292a7ff99d86-kube-api-access-vh4nc\") on node \"crc\" DevicePath \"\"" Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.557249 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492581-qjd4h" event={"ID":"b3b21dcd-161c-4e90-adc7-292a7ff99d86","Type":"ContainerDied","Data":"f315839ef8ef46dc86c09cb7326b193684dcc4de885742217d6a0359bde94cbe"} Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.557287 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f315839ef8ef46dc86c09cb7326b193684dcc4de885742217d6a0359bde94cbe" Jan 27 23:01:07 crc kubenswrapper[4803]: I0127 23:01:07.557321 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492581-qjd4h" Jan 27 23:01:08 crc kubenswrapper[4803]: I0127 23:01:08.301823 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3b21dcd-161c-4e90-adc7-292a7ff99d86" (UID: "b3b21dcd-161c-4e90-adc7-292a7ff99d86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:01:08 crc kubenswrapper[4803]: I0127 23:01:08.334565 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-config-data" (OuterVolumeSpecName: "config-data") pod "b3b21dcd-161c-4e90-adc7-292a7ff99d86" (UID: "b3b21dcd-161c-4e90-adc7-292a7ff99d86"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:01:08 crc kubenswrapper[4803]: I0127 23:01:08.401745 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 23:01:08 crc kubenswrapper[4803]: I0127 23:01:08.402126 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b21dcd-161c-4e90-adc7-292a7ff99d86-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 23:01:16 crc kubenswrapper[4803]: I0127 23:01:16.307613 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:01:16 crc kubenswrapper[4803]: E0127 23:01:16.308327 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:01:28 crc kubenswrapper[4803]: I0127 23:01:28.317203 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:01:28 crc kubenswrapper[4803]: E0127 23:01:28.317957 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:01:41 crc kubenswrapper[4803]: I0127 23:01:41.307617 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:01:41 crc kubenswrapper[4803]: E0127 23:01:41.308459 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:01:54 crc kubenswrapper[4803]: I0127 23:01:54.307522 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:01:54 crc kubenswrapper[4803]: E0127 23:01:54.308434 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:02:09 crc kubenswrapper[4803]: I0127 23:02:09.306993 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:02:09 crc kubenswrapper[4803]: E0127 23:02:09.307953 4803 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.773427 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mw8t4"] Jan 27 23:02:11 crc kubenswrapper[4803]: E0127 23:02:11.774364 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b21dcd-161c-4e90-adc7-292a7ff99d86" containerName="keystone-cron" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.774646 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b21dcd-161c-4e90-adc7-292a7ff99d86" containerName="keystone-cron" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.775112 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b21dcd-161c-4e90-adc7-292a7ff99d86" containerName="keystone-cron" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.777349 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.791808 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mw8t4"] Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.895571 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-utilities\") pod \"community-operators-mw8t4\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.895704 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdddq\" (UniqueName: \"kubernetes.io/projected/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-kube-api-access-pdddq\") pod \"community-operators-mw8t4\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.895819 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-catalog-content\") pod \"community-operators-mw8t4\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.998461 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdddq\" (UniqueName: \"kubernetes.io/projected/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-kube-api-access-pdddq\") pod \"community-operators-mw8t4\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.998615 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-catalog-content\") pod \"community-operators-mw8t4\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " 
pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.998880 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-utilities\") pod \"community-operators-mw8t4\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.999130 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-catalog-content\") pod \"community-operators-mw8t4\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:11 crc kubenswrapper[4803]: I0127 23:02:11.999322 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-utilities\") pod \"community-operators-mw8t4\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:12 crc kubenswrapper[4803]: I0127 23:02:12.026108 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdddq\" (UniqueName: \"kubernetes.io/projected/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-kube-api-access-pdddq\") pod \"community-operators-mw8t4\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:12 crc kubenswrapper[4803]: I0127 23:02:12.114636 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:12 crc kubenswrapper[4803]: I0127 23:02:12.716599 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mw8t4"] Jan 27 23:02:13 crc kubenswrapper[4803]: I0127 23:02:13.291735 4803 generic.go:334] "Generic (PLEG): container finished" podID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" containerID="37623133c353ce7287caae9e1e0f38b8cd5ee221336f38c080e44a500b8a6142" exitCode=0 Jan 27 23:02:13 crc kubenswrapper[4803]: I0127 23:02:13.291791 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mw8t4" event={"ID":"24d4c49b-b4e8-459a-b2e5-f064d9ae172c","Type":"ContainerDied","Data":"37623133c353ce7287caae9e1e0f38b8cd5ee221336f38c080e44a500b8a6142"} Jan 27 23:02:13 crc kubenswrapper[4803]: I0127 23:02:13.291826 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mw8t4" event={"ID":"24d4c49b-b4e8-459a-b2e5-f064d9ae172c","Type":"ContainerStarted","Data":"b9beb51f5e62ccd2d70ada6e82085bb856520f3ea49290fd8ce2199a502e78bc"} Jan 27 23:02:13 crc kubenswrapper[4803]: I0127 23:02:13.294227 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 23:02:15 crc kubenswrapper[4803]: I0127 23:02:15.354653 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mw8t4" event={"ID":"24d4c49b-b4e8-459a-b2e5-f064d9ae172c","Type":"ContainerStarted","Data":"ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82"} Jan 27 23:02:16 crc kubenswrapper[4803]: I0127 23:02:16.365670 4803 generic.go:334] "Generic (PLEG): container finished" podID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" 
containerID="ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82" exitCode=0 Jan 27 23:02:16 crc kubenswrapper[4803]: I0127 23:02:16.365900 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mw8t4" event={"ID":"24d4c49b-b4e8-459a-b2e5-f064d9ae172c","Type":"ContainerDied","Data":"ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82"} Jan 27 23:02:17 crc kubenswrapper[4803]: I0127 23:02:17.377026 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mw8t4" event={"ID":"24d4c49b-b4e8-459a-b2e5-f064d9ae172c","Type":"ContainerStarted","Data":"346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c"} Jan 27 23:02:17 crc kubenswrapper[4803]: I0127 23:02:17.396614 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mw8t4" podStartSLOduration=2.835385297 podStartE2EDuration="6.396597045s" podCreationTimestamp="2026-01-27 23:02:11 +0000 UTC" firstStartedPulling="2026-01-27 23:02:13.293945086 +0000 UTC m=+4485.709966785" lastFinishedPulling="2026-01-27 23:02:16.855156834 +0000 UTC m=+4489.271178533" observedRunningTime="2026-01-27 23:02:17.393626624 +0000 UTC m=+4489.809648323" watchObservedRunningTime="2026-01-27 23:02:17.396597045 +0000 UTC m=+4489.812618744" Jan 27 23:02:21 crc kubenswrapper[4803]: I0127 23:02:21.307091 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:02:21 crc kubenswrapper[4803]: E0127 23:02:21.308174 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:02:22 crc kubenswrapper[4803]: I0127 23:02:22.115254 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:22 crc kubenswrapper[4803]: I0127 23:02:22.115350 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:22 crc kubenswrapper[4803]: I0127 23:02:22.160972 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:22 crc kubenswrapper[4803]: I0127 23:02:22.491490 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:22 crc kubenswrapper[4803]: I0127 23:02:22.541158 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mw8t4"] Jan 27 23:02:24 crc kubenswrapper[4803]: I0127 23:02:24.454453 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mw8t4" podUID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" containerName="registry-server" containerID="cri-o://346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c" gracePeriod=2 Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.462882 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.464801 4803 generic.go:334] "Generic (PLEG): container finished" podID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" containerID="346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c" exitCode=0 Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.464861 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mw8t4" event={"ID":"24d4c49b-b4e8-459a-b2e5-f064d9ae172c","Type":"ContainerDied","Data":"346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c"} Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.464900 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mw8t4" event={"ID":"24d4c49b-b4e8-459a-b2e5-f064d9ae172c","Type":"ContainerDied","Data":"b9beb51f5e62ccd2d70ada6e82085bb856520f3ea49290fd8ce2199a502e78bc"} Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.464920 4803 scope.go:117] "RemoveContainer" containerID="346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.489413 4803 scope.go:117] "RemoveContainer" containerID="ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.517707 4803 scope.go:117] "RemoveContainer" containerID="37623133c353ce7287caae9e1e0f38b8cd5ee221336f38c080e44a500b8a6142" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.563987 4803 scope.go:117] "RemoveContainer" containerID="346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c" Jan 27 23:02:25 crc kubenswrapper[4803]: E0127 23:02:25.564506 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c\": container with ID starting with 346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c not found: ID does not exist" containerID="346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.564560 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c"} err="failed to get container status \"346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c\": rpc error: code = NotFound desc = could not find container \"346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c\": container with ID starting with 346856d77c32dd374d37716eecea90e72c9b7aacb272f3d312f173ec1c7c257c not found: ID does not exist" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.564597 4803 scope.go:117] "RemoveContainer" containerID="ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82" Jan 27 23:02:25 crc kubenswrapper[4803]: E0127 23:02:25.564947 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82\": container with ID starting with ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82 not found: ID does not exist" containerID="ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.565033 4803 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82"} err="failed to get container status \"ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82\": rpc error: code = NotFound desc = could not find container \"ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82\": container with ID starting with ce717c10879d56cd5a64db19caea2d3918f4f778074a8187f6f19352c4e37a82 not found: ID does not exist" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.565059 4803 scope.go:117] "RemoveContainer" containerID="37623133c353ce7287caae9e1e0f38b8cd5ee221336f38c080e44a500b8a6142" Jan 27 23:02:25 crc kubenswrapper[4803]: E0127 23:02:25.565546 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37623133c353ce7287caae9e1e0f38b8cd5ee221336f38c080e44a500b8a6142\": container with ID starting with 37623133c353ce7287caae9e1e0f38b8cd5ee221336f38c080e44a500b8a6142 not found: ID does not exist" containerID="37623133c353ce7287caae9e1e0f38b8cd5ee221336f38c080e44a500b8a6142" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.565578 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37623133c353ce7287caae9e1e0f38b8cd5ee221336f38c080e44a500b8a6142"} err="failed to get container status \"37623133c353ce7287caae9e1e0f38b8cd5ee221336f38c080e44a500b8a6142\": rpc error: code = NotFound desc = could not find container \"37623133c353ce7287caae9e1e0f38b8cd5ee221336f38c080e44a500b8a6142\": container with ID starting with 37623133c353ce7287caae9e1e0f38b8cd5ee221336f38c080e44a500b8a6142 not found: ID does not exist" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.639733 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdddq\" (UniqueName: \"kubernetes.io/projected/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-kube-api-access-pdddq\") pod \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.639787 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-utilities\") pod \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.639901 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-catalog-content\") pod \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\" (UID: \"24d4c49b-b4e8-459a-b2e5-f064d9ae172c\") " Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.640813 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-utilities" (OuterVolumeSpecName: "utilities") pod "24d4c49b-b4e8-459a-b2e5-f064d9ae172c" (UID: "24d4c49b-b4e8-459a-b2e5-f064d9ae172c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.645395 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-kube-api-access-pdddq" (OuterVolumeSpecName: "kube-api-access-pdddq") pod "24d4c49b-b4e8-459a-b2e5-f064d9ae172c" (UID: "24d4c49b-b4e8-459a-b2e5-f064d9ae172c"). InnerVolumeSpecName "kube-api-access-pdddq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.687337 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24d4c49b-b4e8-459a-b2e5-f064d9ae172c" (UID: "24d4c49b-b4e8-459a-b2e5-f064d9ae172c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.743111 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdddq\" (UniqueName: \"kubernetes.io/projected/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-kube-api-access-pdddq\") on node \"crc\" DevicePath \"\"" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.743352 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 23:02:25 crc kubenswrapper[4803]: I0127 23:02:25.743466 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d4c49b-b4e8-459a-b2e5-f064d9ae172c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 23:02:26 crc kubenswrapper[4803]: I0127 23:02:26.475228 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mw8t4" Jan 27 23:02:26 crc kubenswrapper[4803]: I0127 23:02:26.501612 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mw8t4"] Jan 27 23:02:26 crc kubenswrapper[4803]: I0127 23:02:26.512163 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mw8t4"] Jan 27 23:02:28 crc kubenswrapper[4803]: I0127 23:02:28.334062 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" path="/var/lib/kubelet/pods/24d4c49b-b4e8-459a-b2e5-f064d9ae172c/volumes" Jan 27 23:02:33 crc kubenswrapper[4803]: I0127 23:02:33.308086 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:02:33 crc kubenswrapper[4803]: E0127 23:02:33.308959 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:02:44 crc kubenswrapper[4803]: I0127 23:02:44.307459 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:02:44 crc kubenswrapper[4803]: E0127 23:02:44.308441 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:02:58 crc kubenswrapper[4803]: I0127 23:02:58.322502 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:02:58 crc kubenswrapper[4803]: E0127 23:02:58.323312 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:03:10 crc kubenswrapper[4803]: I0127 23:03:10.308451 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:03:10 crc kubenswrapper[4803]: E0127 23:03:10.309808 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:03:21 crc kubenswrapper[4803]: I0127 23:03:21.307419 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 
23:03:21 crc kubenswrapper[4803]: E0127 23:03:21.308336 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:03:33 crc kubenswrapper[4803]: I0127 23:03:33.306644 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:03:33 crc kubenswrapper[4803]: E0127 23:03:33.307457 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.727657 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 23:03:43 crc kubenswrapper[4803]: E0127 23:03:43.728760 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" containerName="registry-server" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.728776 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" containerName="registry-server" Jan 27 23:03:43 crc kubenswrapper[4803]: E0127 23:03:43.728809 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" containerName="extract-content" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.728818 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" containerName="extract-content" Jan 27 23:03:43 crc kubenswrapper[4803]: E0127 23:03:43.728866 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" containerName="extract-utilities" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.728874 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" containerName="extract-utilities" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.729183 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d4c49b-b4e8-459a-b2e5-f064d9ae172c" containerName="registry-server" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.731074 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.736754 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-r2wvq" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.737155 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.737474 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.737754 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.744130 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.883182 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrj99\" (UniqueName: \"kubernetes.io/projected/9af7a299-6a76-452c-854d-d80a082dabf1-kube-api-access-vrj99\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.883468 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.883548 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.883672 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.883711 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.883741 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.883776 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.883805 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-config-data\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.883935 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.986315 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.986464 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrj99\" (UniqueName: \"kubernetes.io/projected/9af7a299-6a76-452c-854d-d80a082dabf1-kube-api-access-vrj99\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.986584 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.986616 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.986676 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.986702 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.986723 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.986743 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.986760 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-config-data\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.987486 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.987522 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.988819 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.990016 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-config-data\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.990359 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.992803 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 23:03:43.993282 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:43 crc kubenswrapper[4803]: I0127 
23:03:43.997247 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:44 crc kubenswrapper[4803]: I0127 23:03:44.017534 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrj99\" (UniqueName: \"kubernetes.io/projected/9af7a299-6a76-452c-854d-d80a082dabf1-kube-api-access-vrj99\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:44 crc kubenswrapper[4803]: I0127 23:03:44.035169 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " pod="openstack/tempest-tests-tempest" Jan 27 23:03:44 crc kubenswrapper[4803]: I0127 23:03:44.067275 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 23:03:44 crc kubenswrapper[4803]: I0127 23:03:44.307606 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:03:44 crc kubenswrapper[4803]: E0127 23:03:44.308346 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:03:44 crc kubenswrapper[4803]: I0127 23:03:44.574477 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 23:03:44 crc kubenswrapper[4803]: W0127 23:03:44.578695 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af7a299_6a76_452c_854d_d80a082dabf1.slice/crio-43b61e02b905b7462659a3f6743a8b5efa0aeeeac6cca4330c9659187d460e0d WatchSource:0}: Error finding container 43b61e02b905b7462659a3f6743a8b5efa0aeeeac6cca4330c9659187d460e0d: Status 404 returned error can't find the container with id 43b61e02b905b7462659a3f6743a8b5efa0aeeeac6cca4330c9659187d460e0d Jan 27 23:03:45 crc kubenswrapper[4803]: I0127 23:03:45.319433 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9af7a299-6a76-452c-854d-d80a082dabf1","Type":"ContainerStarted","Data":"43b61e02b905b7462659a3f6743a8b5efa0aeeeac6cca4330c9659187d460e0d"} Jan 27 23:03:57 crc kubenswrapper[4803]: I0127 23:03:57.307054 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:04:08 crc kubenswrapper[4803]: I0127 23:04:08.246757 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-54c764888c-dpmfw" podUID="912aaad5-2b5b-431b-821f-0ba813a0faaf" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 27 23:04:15 crc kubenswrapper[4803]: E0127 23:04:15.899250 4803 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 27 23:04:15 crc kubenswrapper[4803]: E0127 23:04:15.903504 4803 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrj99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(9af7a299-6a76-452c-854d-d80a082dabf1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 23:04:15 crc kubenswrapper[4803]: E0127 23:04:15.904621 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="9af7a299-6a76-452c-854d-d80a082dabf1" Jan 27 23:04:16 crc kubenswrapper[4803]: I0127 23:04:16.690532 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"e195e4590bf4eb00374d7f4aa7585484d9570421738b754585197e9eadc6e0e7"} Jan 27 23:04:16 crc kubenswrapper[4803]: E0127 23:04:16.692602 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="9af7a299-6a76-452c-854d-d80a082dabf1" Jan 27 23:04:31 crc kubenswrapper[4803]: I0127 23:04:31.749636 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 23:04:33 crc kubenswrapper[4803]: I0127 23:04:33.876201 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9af7a299-6a76-452c-854d-d80a082dabf1","Type":"ContainerStarted","Data":"568fdfc6d7ee210678a5bb46f952c124af7b6c37d3b707be49cf4faee7e1f065"} Jan 27 23:04:33 crc kubenswrapper[4803]: I0127 23:04:33.900293 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.737696101 podStartE2EDuration="51.900266497s" podCreationTimestamp="2026-01-27 23:03:42 +0000 UTC" firstStartedPulling="2026-01-27 23:03:44.583734869 +0000 UTC m=+4576.999756568" lastFinishedPulling="2026-01-27 23:04:31.746305255 +0000 UTC m=+4624.162326964" observedRunningTime="2026-01-27 23:04:33.895526169 +0000 UTC m=+4626.311547868" watchObservedRunningTime="2026-01-27 23:04:33.900266497 +0000 UTC m=+4626.316288206" Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.417306 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cf75z"] Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.433537 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.524961 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf75z"] Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.541828 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdjsp\" (UniqueName: \"kubernetes.io/projected/a706c043-867c-41fe-b910-d992fded9161-kube-api-access-wdjsp\") pod \"redhat-marketplace-cf75z\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.542176 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-catalog-content\") pod \"redhat-marketplace-cf75z\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.542656 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-utilities\") pod \"redhat-marketplace-cf75z\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.645310 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-utilities\") pod \"redhat-marketplace-cf75z\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.645421 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdjsp\" (UniqueName: \"kubernetes.io/projected/a706c043-867c-41fe-b910-d992fded9161-kube-api-access-wdjsp\") pod \"redhat-marketplace-cf75z\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.645510 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-catalog-content\") pod \"redhat-marketplace-cf75z\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.649577 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-utilities\") pod \"redhat-marketplace-cf75z\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.650332 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-catalog-content\") pod \"redhat-marketplace-cf75z\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.680471 4803 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wdjsp\" (UniqueName: \"kubernetes.io/projected/a706c043-867c-41fe-b910-d992fded9161-kube-api-access-wdjsp\") pod \"redhat-marketplace-cf75z\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:23 crc kubenswrapper[4803]: I0127 23:05:23.801590 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:25 crc kubenswrapper[4803]: I0127 23:05:25.124252 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf75z"] Jan 27 23:05:25 crc kubenswrapper[4803]: I0127 23:05:25.473997 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf75z" event={"ID":"a706c043-867c-41fe-b910-d992fded9161","Type":"ContainerStarted","Data":"d1961821801da325e8537b5c57f29bd3cc57a37002dbb244500084870f9b1e69"} Jan 27 23:05:25 crc kubenswrapper[4803]: I0127 23:05:25.474355 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf75z" event={"ID":"a706c043-867c-41fe-b910-d992fded9161","Type":"ContainerStarted","Data":"2fd19df9983ba2d568293939f8ac01ffdf8609782435938411771e7a0dd3d8fd"} Jan 27 23:05:26 crc kubenswrapper[4803]: I0127 23:05:26.487470 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf75z" event={"ID":"a706c043-867c-41fe-b910-d992fded9161","Type":"ContainerDied","Data":"d1961821801da325e8537b5c57f29bd3cc57a37002dbb244500084870f9b1e69"} Jan 27 23:05:26 crc kubenswrapper[4803]: I0127 23:05:26.488228 4803 generic.go:334] "Generic (PLEG): container finished" podID="a706c043-867c-41fe-b910-d992fded9161" containerID="d1961821801da325e8537b5c57f29bd3cc57a37002dbb244500084870f9b1e69" exitCode=0 Jan 27 23:05:27 crc kubenswrapper[4803]: I0127 23:05:27.501368 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf75z" event={"ID":"a706c043-867c-41fe-b910-d992fded9161","Type":"ContainerStarted","Data":"2bf170079387dd705aa5109564ead7de286d574008baee05de6fb068d3ee2a3d"} Jan 27 23:05:29 crc kubenswrapper[4803]: I0127 23:05:29.531014 4803 generic.go:334] "Generic (PLEG): container finished" podID="a706c043-867c-41fe-b910-d992fded9161" containerID="2bf170079387dd705aa5109564ead7de286d574008baee05de6fb068d3ee2a3d" exitCode=0 Jan 27 23:05:29 crc kubenswrapper[4803]: I0127 23:05:29.531121 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf75z" event={"ID":"a706c043-867c-41fe-b910-d992fded9161","Type":"ContainerDied","Data":"2bf170079387dd705aa5109564ead7de286d574008baee05de6fb068d3ee2a3d"} Jan 27 23:05:30 crc kubenswrapper[4803]: I0127 23:05:30.543350 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf75z" event={"ID":"a706c043-867c-41fe-b910-d992fded9161","Type":"ContainerStarted","Data":"a3e82c77e5a8adfc7588f014019b07249777f140b0f7d3a7192dac997f0153e7"} Jan 27 23:05:30 crc kubenswrapper[4803]: I0127 23:05:30.565301 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cf75z" podStartSLOduration=4.096769407 podStartE2EDuration="7.562896528s" podCreationTimestamp="2026-01-27 23:05:23 +0000 UTC" firstStartedPulling="2026-01-27 23:05:26.492214302 +0000 UTC m=+4678.908236001" lastFinishedPulling="2026-01-27 23:05:29.958341423 +0000 UTC 
m=+4682.374363122" observedRunningTime="2026-01-27 23:05:30.559112086 +0000 UTC m=+4682.975133775" watchObservedRunningTime="2026-01-27 23:05:30.562896528 +0000 UTC m=+4682.978918227" Jan 27 23:05:33 crc kubenswrapper[4803]: I0127 23:05:33.802390 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:33 crc kubenswrapper[4803]: I0127 23:05:33.802982 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:34 crc kubenswrapper[4803]: I0127 23:05:34.862437 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cf75z" podUID="a706c043-867c-41fe-b910-d992fded9161" containerName="registry-server" probeResult="failure" output=< Jan 27 23:05:34 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:05:34 crc kubenswrapper[4803]: > Jan 27 23:05:44 crc kubenswrapper[4803]: I0127 23:05:44.906303 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cf75z" podUID="a706c043-867c-41fe-b910-d992fded9161" containerName="registry-server" probeResult="failure" output=< Jan 27 23:05:44 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:05:44 crc kubenswrapper[4803]: > Jan 27 23:05:54 crc kubenswrapper[4803]: I0127 23:05:54.125126 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:54 crc kubenswrapper[4803]: I0127 23:05:54.180510 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:05:54 crc kubenswrapper[4803]: I0127 23:05:54.375072 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf75z"] Jan 27 23:05:55 crc kubenswrapper[4803]: I0127 23:05:55.245156 4803 patch_prober.go:28] interesting pod/monitoring-plugin-8d685d9cc-c64j5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:05:55 crc kubenswrapper[4803]: I0127 23:05:55.249578 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" podUID="354a68b0-46f4-4cae-afbe-c5ef5fba4bdf" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:55 crc kubenswrapper[4803]: I0127 23:05:55.480225 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" podUID="5bedb1c3-9c5a-4137-851d-33b1723a3221" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:55 crc kubenswrapper[4803]: I0127 23:05:55.480381 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" podUID="5bedb1c3-9c5a-4137-851d-33b1723a3221" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:55 crc kubenswrapper[4803]: I0127 23:05:55.817863 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="6c78b382-5735-4741-b087-cefda68053f4" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:05:55 crc kubenswrapper[4803]: I0127 23:05:55.817865 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="6c78b382-5735-4741-b087-cefda68053f4" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:05:55 crc kubenswrapper[4803]: I0127 23:05:55.877077 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" podUID="62a498d3-45eb-4117-ba22-041e8d90762d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:55 crc kubenswrapper[4803]: I0127 23:05:55.877246 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" podUID="62a498d3-45eb-4117-ba22-041e8d90762d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:55 crc kubenswrapper[4803]: I0127 23:05:55.887979 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cf75z" podUID="a706c043-867c-41fe-b910-d992fded9161" containerName="registry-server" containerID="cri-o://a3e82c77e5a8adfc7588f014019b07249777f140b0f7d3a7192dac997f0153e7" gracePeriod=2 Jan 27 23:05:56 crc kubenswrapper[4803]: I0127 23:05:56.057154 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" podUID="021b5278-1b81-43b3-ae44-ec231fb77687" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.46:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:56 crc kubenswrapper[4803]: I0127 23:05:56.057290 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" podUID="021b5278-1b81-43b3-ae44-ec231fb77687" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.46:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:56 crc kubenswrapper[4803]: I0127 23:05:56.103288 4803 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:05:56 crc kubenswrapper[4803]: I0127 23:05:56.103368 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:56 crc kubenswrapper[4803]: I0127 23:05:56.922149 4803 patch_prober.go:28] interesting 
pod/controller-manager-7df488d7f-9qs98 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:05:56 crc kubenswrapper[4803]: I0127 23:05:56.922264 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:56 crc kubenswrapper[4803]: I0127 23:05:56.936559 4803 patch_prober.go:28] interesting pod/controller-manager-7df488d7f-9qs98 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:05:56 crc kubenswrapper[4803]: I0127 23:05:56.936620 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:57 crc kubenswrapper[4803]: I0127 23:05:56.986488 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:57 crc kubenswrapper[4803]: I0127 23:05:57.785730 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="4493a984-e728-410f-9362-0795391f2793" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:05:57 crc kubenswrapper[4803]: I0127 23:05:57.786793 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4493a984-e728-410f-9362-0795391f2793" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:05:57 crc kubenswrapper[4803]: I0127 23:05:57.810077 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlj5d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:05:57 crc kubenswrapper[4803]: I0127 23:05:57.810149 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:57 crc kubenswrapper[4803]: I0127 23:05:57.810433 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlj5d container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:05:57 crc kubenswrapper[4803]: I0127 23:05:57.810475 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:57 crc kubenswrapper[4803]: I0127 23:05:57.863547 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf75z" event={"ID":"a706c043-867c-41fe-b910-d992fded9161","Type":"ContainerDied","Data":"a3e82c77e5a8adfc7588f014019b07249777f140b0f7d3a7192dac997f0153e7"} Jan 27 23:05:57 crc kubenswrapper[4803]: I0127 23:05:57.863479 4803 generic.go:334] "Generic (PLEG): container finished" podID="a706c043-867c-41fe-b910-d992fded9161" containerID="a3e82c77e5a8adfc7588f014019b07249777f140b0f7d3a7192dac997f0153e7" exitCode=0 Jan 27 23:05:58 crc kubenswrapper[4803]: I0127 23:05:58.006015 4803 patch_prober.go:28] interesting pod/oauth-openshift-769fc69b77-cp7hp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:05:58 crc kubenswrapper[4803]: I0127 23:05:58.006077 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" podUID="3446baa2-c061-41ff-9652-16734b5bb97a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:58 crc kubenswrapper[4803]: I0127 23:05:58.006304 4803 patch_prober.go:28] interesting pod/oauth-openshift-769fc69b77-cp7hp container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:05:58 crc kubenswrapper[4803]: I0127 23:05:58.006320 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" podUID="3446baa2-c061-41ff-9652-16734b5bb97a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:58 crc kubenswrapper[4803]: I0127 23:05:58.414208 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-tp8d4" podUID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerName="registry-server" probeResult="failure" output=< Jan 27 23:05:58 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:05:58 crc kubenswrapper[4803]: > Jan 27 23:05:58 crc kubenswrapper[4803]: I0127 23:05:58.416170 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-tp8d4" podUID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerName="registry-server" probeResult="failure" output=< Jan 27 23:05:58 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:05:58 crc kubenswrapper[4803]: > Jan 27 23:05:58 crc kubenswrapper[4803]: I0127 23:05:58.695101 
4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" podUID="e9d93e19-7c2b-4d53-bfe8-7b0157dec931" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:05:58 crc kubenswrapper[4803]: I0127 23:05:58.695134 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" podUID="e9d93e19-7c2b-4d53-bfe8-7b0157dec931" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:00 crc kubenswrapper[4803]: I0127 23:06:00.887337 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:06:00 crc kubenswrapper[4803]: I0127 23:06:00.897214 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf75z" event={"ID":"a706c043-867c-41fe-b910-d992fded9161","Type":"ContainerDied","Data":"2fd19df9983ba2d568293939f8ac01ffdf8609782435938411771e7a0dd3d8fd"} Jan 27 23:06:00 crc kubenswrapper[4803]: I0127 23:06:00.897548 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf75z" Jan 27 23:06:00 crc kubenswrapper[4803]: I0127 23:06:00.902620 4803 scope.go:117] "RemoveContainer" containerID="a3e82c77e5a8adfc7588f014019b07249777f140b0f7d3a7192dac997f0153e7" Jan 27 23:06:00 crc kubenswrapper[4803]: I0127 23:06:00.935622 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-utilities\") pod \"a706c043-867c-41fe-b910-d992fded9161\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " Jan 27 23:06:00 crc kubenswrapper[4803]: I0127 23:06:00.935867 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdjsp\" (UniqueName: \"kubernetes.io/projected/a706c043-867c-41fe-b910-d992fded9161-kube-api-access-wdjsp\") pod \"a706c043-867c-41fe-b910-d992fded9161\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " Jan 27 23:06:00 crc kubenswrapper[4803]: I0127 23:06:00.936020 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-catalog-content\") pod \"a706c043-867c-41fe-b910-d992fded9161\" (UID: \"a706c043-867c-41fe-b910-d992fded9161\") " Jan 27 23:06:00 crc kubenswrapper[4803]: I0127 23:06:00.952097 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-utilities" (OuterVolumeSpecName: "utilities") pod "a706c043-867c-41fe-b910-d992fded9161" (UID: "a706c043-867c-41fe-b910-d992fded9161"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:06:00 crc kubenswrapper[4803]: I0127 23:06:00.986929 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a706c043-867c-41fe-b910-d992fded9161-kube-api-access-wdjsp" (OuterVolumeSpecName: "kube-api-access-wdjsp") pod "a706c043-867c-41fe-b910-d992fded9161" (UID: "a706c043-867c-41fe-b910-d992fded9161"). 
InnerVolumeSpecName "kube-api-access-wdjsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:06:01 crc kubenswrapper[4803]: I0127 23:06:01.017168 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a706c043-867c-41fe-b910-d992fded9161" (UID: "a706c043-867c-41fe-b910-d992fded9161"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:06:01 crc kubenswrapper[4803]: I0127 23:06:01.018755 4803 scope.go:117] "RemoveContainer" containerID="2bf170079387dd705aa5109564ead7de286d574008baee05de6fb068d3ee2a3d" Jan 27 23:06:01 crc kubenswrapper[4803]: I0127 23:06:01.044526 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 23:06:01 crc kubenswrapper[4803]: I0127 23:06:01.044560 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdjsp\" (UniqueName: \"kubernetes.io/projected/a706c043-867c-41fe-b910-d992fded9161-kube-api-access-wdjsp\") on node \"crc\" DevicePath \"\"" Jan 27 23:06:01 crc kubenswrapper[4803]: I0127 23:06:01.044571 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a706c043-867c-41fe-b910-d992fded9161-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 23:06:01 crc kubenswrapper[4803]: I0127 23:06:01.155505 4803 scope.go:117] "RemoveContainer" containerID="d1961821801da325e8537b5c57f29bd3cc57a37002dbb244500084870f9b1e69" Jan 27 23:06:01 crc kubenswrapper[4803]: I0127 23:06:01.344358 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf75z"] Jan 27 23:06:01 crc kubenswrapper[4803]: I0127 23:06:01.442826 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf75z"] Jan 27 23:06:01 crc kubenswrapper[4803]: E0127 23:06:01.567648 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda706c043_867c_41fe_b910_d992fded9161.slice/crio-2fd19df9983ba2d568293939f8ac01ffdf8609782435938411771e7a0dd3d8fd\": RecentStats: unable to find data in memory cache]" Jan 27 23:06:02 crc kubenswrapper[4803]: I0127 23:06:02.335643 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a706c043-867c-41fe-b910-d992fded9161" path="/var/lib/kubelet/pods/a706c043-867c-41fe-b910-d992fded9161/volumes" Jan 27 23:06:16 crc kubenswrapper[4803]: I0127 23:06:16.353110 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 23:06:16 crc kubenswrapper[4803]: I0127 23:06:16.367505 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.193908 4803 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-74nng"] Jan 27 23:06:30 crc kubenswrapper[4803]: E0127 23:06:30.216310 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a706c043-867c-41fe-b910-d992fded9161" containerName="extract-content" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.216618 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a706c043-867c-41fe-b910-d992fded9161" containerName="extract-content" Jan 27 23:06:30 crc kubenswrapper[4803]: E0127 23:06:30.217743 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a706c043-867c-41fe-b910-d992fded9161" containerName="extract-utilities" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.217759 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a706c043-867c-41fe-b910-d992fded9161" containerName="extract-utilities" Jan 27 23:06:30 crc kubenswrapper[4803]: E0127 23:06:30.217810 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a706c043-867c-41fe-b910-d992fded9161" containerName="registry-server" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.217833 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a706c043-867c-41fe-b910-d992fded9161" containerName="registry-server" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.219959 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a706c043-867c-41fe-b910-d992fded9161" containerName="registry-server" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.238105 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.395573 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-catalog-content\") pod \"certified-operators-74nng\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.395756 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbzs8\" (UniqueName: \"kubernetes.io/projected/654b6723-6b6d-41ac-92fe-f097f87735a4-kube-api-access-pbzs8\") pod \"certified-operators-74nng\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.395915 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-utilities\") pod \"certified-operators-74nng\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.461749 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74nng"] Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.498693 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbzs8\" (UniqueName: \"kubernetes.io/projected/654b6723-6b6d-41ac-92fe-f097f87735a4-kube-api-access-pbzs8\") pod \"certified-operators-74nng\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 
23:06:30.498765 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-utilities\") pod \"certified-operators-74nng\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.498982 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-catalog-content\") pod \"certified-operators-74nng\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.513622 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-utilities\") pod \"certified-operators-74nng\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.513628 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-catalog-content\") pod \"certified-operators-74nng\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.562230 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbzs8\" (UniqueName: \"kubernetes.io/projected/654b6723-6b6d-41ac-92fe-f097f87735a4-kube-api-access-pbzs8\") pod \"certified-operators-74nng\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:30 crc kubenswrapper[4803]: I0127 23:06:30.577483 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:32 crc kubenswrapper[4803]: I0127 23:06:32.448884 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74nng"] Jan 27 23:06:33 crc kubenswrapper[4803]: E0127 23:06:33.195698 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod654b6723_6b6d_41ac_92fe_f097f87735a4.slice/crio-conmon-0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d.scope\": RecentStats: unable to find data in memory cache]" Jan 27 23:06:33 crc kubenswrapper[4803]: I0127 23:06:33.253706 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74nng" event={"ID":"654b6723-6b6d-41ac-92fe-f097f87735a4","Type":"ContainerDied","Data":"0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d"} Jan 27 23:06:33 crc kubenswrapper[4803]: I0127 23:06:33.254821 4803 generic.go:334] "Generic (PLEG): container finished" podID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerID="0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d" exitCode=0 Jan 27 23:06:33 crc kubenswrapper[4803]: I0127 23:06:33.255311 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74nng" event={"ID":"654b6723-6b6d-41ac-92fe-f097f87735a4","Type":"ContainerStarted","Data":"3a66ddee2d12b2109090716d81a7a83113e8f28f5ed77a583a8635e38f686d77"} Jan 27 23:06:35 crc kubenswrapper[4803]: I0127 23:06:35.277479 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74nng" event={"ID":"654b6723-6b6d-41ac-92fe-f097f87735a4","Type":"ContainerStarted","Data":"a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848"} Jan 27 23:06:40 crc kubenswrapper[4803]: I0127 23:06:40.921440 4803 trace.go:236] Trace[322451684]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (27-Jan-2026 23:06:39.587) (total time: 1321ms): Jan 27 23:06:40 crc kubenswrapper[4803]: Trace[322451684]: [1.32188904s] [1.32188904s] END Jan 27 23:06:41 crc kubenswrapper[4803]: I0127 23:06:41.366431 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74nng" event={"ID":"654b6723-6b6d-41ac-92fe-f097f87735a4","Type":"ContainerDied","Data":"a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848"} Jan 27 23:06:41 crc kubenswrapper[4803]: I0127 23:06:41.370461 4803 generic.go:334] "Generic (PLEG): container finished" podID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerID="a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848" exitCode=0 Jan 27 23:06:41 crc kubenswrapper[4803]: I0127 23:06:41.924153 4803 patch_prober.go:28] interesting pod/route-controller-manager-c4b5fc665-k52v8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:41 crc kubenswrapper[4803]: I0127 23:06:41.925716 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podUID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.393775 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" podUID="1f1cd413-71e0-443e-95cf-e5d46a745b1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.393886 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" podUID="1f1cd413-71e0-443e-95cf-e5d46a745b1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.429213 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74nng" event={"ID":"654b6723-6b6d-41ac-92fe-f097f87735a4","Type":"ContainerStarted","Data":"a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f"} Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.505153 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" podUID="c46ecfda-be7b-4f42-9874-a8a94f71188f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.541827 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-74nng" podStartSLOduration=5.718289002 podStartE2EDuration="14.53900624s" podCreationTimestamp="2026-01-27 23:06:29 +0000 UTC" firstStartedPulling="2026-01-27 23:06:33.257690425 +0000 UTC m=+4745.673712124" lastFinishedPulling="2026-01-27 23:06:42.078407663 +0000 UTC m=+4754.494429362" observedRunningTime="2026-01-27 23:06:43.532962847 +0000 UTC m=+4755.948984556" watchObservedRunningTime="2026-01-27 23:06:43.53900624 +0000 UTC m=+4755.955027939" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.587001 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.587059 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.587082 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" podUID="c46ecfda-be7b-4f42-9874-a8a94f71188f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 
Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.604871 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" podUID="7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.654045 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.654261 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.654135 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" podUID="7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.772062 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" podUID="35742b16-a222-4602-ae0a-d078eafb1ea1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.772130 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" podUID="35742b16-a222-4602-ae0a-d078eafb1ea1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.855012 4803 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-nfxjq container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.37:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.855076 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" podUID="5b3c1908-cc42-4af3-a73d-916466d38dd6" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:43 crc kubenswrapper[4803]: I0127 23:06:43.855283 4803 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-nfxjq container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:43 crc 
kubenswrapper[4803]: I0127 23:06:43.855341 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" podUID="5b3c1908-cc42-4af3-a73d-916466d38dd6" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:44 crc kubenswrapper[4803]: I0127 23:06:44.097998 4803 trace.go:236] Trace[361064688]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (27-Jan-2026 23:06:42.826) (total time: 1264ms): Jan 27 23:06:44 crc kubenswrapper[4803]: Trace[361064688]: [1.264265862s] [1.264265862s] END Jan 27 23:06:44 crc kubenswrapper[4803]: I0127 23:06:44.201087 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" podUID="e163066d-c764-49e0-9119-cbeb4f4fe50b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:44 crc kubenswrapper[4803]: I0127 23:06:44.201102 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" podUID="e163066d-c764-49e0-9119-cbeb4f4fe50b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:44 crc kubenswrapper[4803]: I0127 23:06:44.788205 4803 patch_prober.go:28] interesting pod/metrics-server-5dc8cc774c-42hcg container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:44 crc kubenswrapper[4803]: I0127 23:06:44.788550 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" podUID="f978ff10-12ad-4883-98d9-7ce831fad147" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.241111 4803 patch_prober.go:28] interesting pod/monitoring-plugin-8d685d9cc-c64j5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.241181 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" podUID="354a68b0-46f4-4cae-afbe-c5ef5fba4bdf" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.253925 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.254039 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.283047 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.283079 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" podUID="2beb4659-d63e-495f-a32f-f94cbcbbc1ce" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.283108 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.440192 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" podUID="5bedb1c3-9c5a-4137-851d-33b1723a3221" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.561476 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.561550 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.639036 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.639078 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" podUID="038e0b5a-3e3b-462b-83ca-c9865b6f4240" containerName="webhook-server" probeResult="failure" output="Get 
\"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.639076 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" podUID="038e0b5a-3e3b-462b-83ca-c9865b6f4240" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.639129 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.786516 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="6c78b382-5735-4741-b087-cefda68053f4" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.786517 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="6c78b382-5735-4741-b087-cefda68053f4" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:06:45 crc kubenswrapper[4803]: I0127 23:06:45.836049 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" podUID="62a498d3-45eb-4117-ba22-041e8d90762d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.102931 4803 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.103012 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.343376 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.344035 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 
Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.453141 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" podUID="ceff729d-b83b-45b4-99ef-d11ef9570efb" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.453165 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" podUID="ceff729d-b83b-45b4-99ef-d11ef9570efb" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.534175 4803 patch_prober.go:28] interesting pod/thanos-querier-7fd45b674-f8ngk container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.534242 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" podUID="f118d287-ae55-421d-9b9a-050b79b6692b" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.789975 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.906536 4803 patch_prober.go:28] interesting pod/controller-manager-7df488d7f-9qs98 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.906612 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.906725 4803 patch_prober.go:28] interesting pod/controller-manager-7df488d7f-9qs98 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:46 crc kubenswrapper[4803]: I0127 23:06:46.906775 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.065108 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.065232 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.065739 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.715757 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="f9122f89-a56c-47d7-ad05-9aab6acdcc2f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.715864 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="f9122f89-a56c-47d7-ad05-9aab6acdcc2f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.785558 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4493a984-e728-410f-9362-0795391f2793" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.786493 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="4493a984-e728-410f-9362-0795391f2793" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.810212 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlj5d container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.810279 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.810535 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlj5d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.810599 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.906877 4803 patch_prober.go:28] interesting pod/oauth-openshift-769fc69b77-cp7hp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.906923 4803 patch_prober.go:28] interesting pod/oauth-openshift-769fc69b77-cp7hp container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.906986 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" podUID="3446baa2-c061-41ff-9652-16734b5bb97a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:47 crc kubenswrapper[4803]: I0127 23:06:47.906931 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" podUID="3446baa2-c061-41ff-9652-16734b5bb97a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:48 crc kubenswrapper[4803]: I0127 23:06:48.732072 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" podUID="e9d93e19-7c2b-4d53-bfe8-7b0157dec931" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:48 crc kubenswrapper[4803]: I0127 23:06:48.785634 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 23:06:48 crc kubenswrapper[4803]: I0127 23:06:48.785648 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.182066 4803 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:49 crc 
kubenswrapper[4803]: I0127 23:06:49.182145 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.438234 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-tp8d4" podUID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerName="registry-server" probeResult="failure" output=< Jan 27 23:06:49 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:06:49 crc kubenswrapper[4803]: > Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.438553 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-tp8d4" podUID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerName="registry-server" probeResult="failure" output=< Jan 27 23:06:49 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:06:49 crc kubenswrapper[4803]: > Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.542675 4803 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-zr5dw container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.542770 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" podUID="dea15eec-6442-4acb-b40a-418dddb46623" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.768688 4803 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-q4xmw container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.769132 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" podUID="1e455314-8336-4d0e-a611-044952db08e7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.786483 4803 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-bs4dm container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.786543 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" 
podUID="0323234b-6aa2-41ea-bf58-a4b3924d6e4a" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.864222 4803 patch_prober.go:28] interesting pod/console-operator-58897d9998-h9nvv container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.864787 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.864323 4803 patch_prober.go:28] interesting pod/console-operator-58897d9998-h9nvv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.864971 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.999160 4803 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-kdr8w container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:49 crc kubenswrapper[4803]: I0127 23:06:49.999213 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" podUID="8f8b8ad1-f276-4546-afd2-49f338f38c92" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.077136 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.077215 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.077287 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.077205 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.165113 4803 patch_prober.go:28] interesting pod/loki-operator-controller-manager-b65d5f66c-f2bd5 container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.165443 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" podUID="51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.165147 4803 patch_prober.go:28] interesting pod/loki-operator-controller-manager-b65d5f66c-f2bd5 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.165721 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" podUID="51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.253204 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.253243 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.253274 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" 
podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.253275 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.350261 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.350312 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.350617 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.350633 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.350789 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.350874 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.351138 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.351167 4803 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.357726 4803 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.357759 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.357933 4803 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.357994 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.453064 4803 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-d65kn container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.453084 4803 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-d65kn container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.453128 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" podUID="ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.453152 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" podUID="ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9" containerName="package-server-manager" probeResult="failure" output="Get 
\"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.567144 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.567213 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.583732 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.584525 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.604965 4803 patch_prober.go:28] interesting pod/console-98b9df85f-f5gmm container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.140:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:50 crc kubenswrapper[4803]: I0127 23:06:50.605050 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-98b9df85f-f5gmm" podUID="fa470512-29ae-4707-abdb-a93dd93f6b58" containerName="console" probeResult="failure" output="Get \"https://10.217.0.140:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:51 crc kubenswrapper[4803]: I0127 23:06:51.017167 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" podUID="021b5278-1b81-43b3-ae44-ec231fb77687" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.46:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:51 crc kubenswrapper[4803]: I0127 23:06:51.534572 4803 patch_prober.go:28] interesting pod/thanos-querier-7fd45b674-f8ngk container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:51 crc kubenswrapper[4803]: I0127 23:06:51.534994 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" podUID="f118d287-ae55-421d-9b9a-050b79b6692b" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:51 crc kubenswrapper[4803]: I0127 23:06:51.789441 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 23:06:51 crc kubenswrapper[4803]: I0127 
Jan 27 23:06:51 crc kubenswrapper[4803]: I0127 23:06:51.877792 4803 patch_prober.go:28] interesting pod/route-controller-manager-c4b5fc665-k52v8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:51 crc kubenswrapper[4803]: I0127 23:06:51.877873 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podUID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:51 crc kubenswrapper[4803]: I0127 23:06:51.877799 4803 patch_prober.go:28] interesting pod/route-controller-manager-c4b5fc665-k52v8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:51 crc kubenswrapper[4803]: I0127 23:06:51.878202 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podUID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.028573 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.029003 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.028606 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.029112 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.078044 4803 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-bqlpm container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.90:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.078096 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" podUID="77dd058d-f38b-4382-923d-f68fbb3c9566" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.90:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.523569 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.523633 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.523776 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.523827 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.867052 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" podUID="eac7ef2c-904d-429b-ac3f-a43a72339fde" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.920053 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" podUID="c6f78887-1cda-463f-ab3f-57703bfb7a41" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:52 crc kubenswrapper[4803]: I0127 23:06:52.962085 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" podUID="51221b4b-024e-4134-8baa-a9478c8c596a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.003140 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" podUID="f8498dfc-1b67-4783-9389-10d5b30b2860" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.118088 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" podUID="9c6792d4-9d18-4d1c-b855-65aba5ae4919" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.160028 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" podUID="29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.202066 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" podUID="47dce22a-001c-4774-ab99-28cd85420e1c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.392064 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" podUID="662a79ef-9928-408c-8cfb-62945e0b6725" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.392068 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" podUID="1f1cd413-71e0-443e-95cf-e5d46a745b1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.507024 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" podUID="c46ecfda-be7b-4f42-9874-a8a94f71188f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.507348 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" podUID="b6c89c2e-a080-4d20-bc81-bda0f9eb17b6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.549150 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" podUID="7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.632034 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.632050 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.632297 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.632382 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.739069 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" podUID="0592ab2d-4ade-4747-a823-73cd5dcac047" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.739081 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" podUID="35742b16-a222-4602-ae0a-d078eafb1ea1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.780089 4803 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-nfxjq container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.37:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.780447 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" podUID="5b3c1908-cc42-4af3-a73d-916466d38dd6" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.786360 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.786632 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.821190 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" podUID="eae71f44-8628-4436-be64-9ac3aa8f9255" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.863092 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" podUID="9dde9803-1302-4f0f-a353-1313e3696d7b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.904162 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" podUID="7b65a167-f9c8-475c-be5b-39e0502352ab" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:53 crc kubenswrapper[4803]: I0127 23:06:53.946074 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" podUID="57c28f35-52f1-48aa-ad74-3f66a5cdd52c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:54 crc kubenswrapper[4803]: I0127 23:06:54.161144 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" podUID="e163066d-c764-49e0-9119-cbeb4f4fe50b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:54 crc kubenswrapper[4803]: I0127 23:06:54.784926 4803 patch_prober.go:28] interesting pod/metrics-server-5dc8cc774c-42hcg container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:54 crc kubenswrapper[4803]: I0127 23:06:54.785282 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" podUID="f978ff10-12ad-4883-98d9-7ce831fad147" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:54 crc kubenswrapper[4803]: I0127 23:06:54.784984 4803 patch_prober.go:28] interesting pod/metrics-server-5dc8cc774c-42hcg container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:54 crc kubenswrapper[4803]: I0127 23:06:54.785344 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" podUID="f978ff10-12ad-4883-98d9-7ce831fad147" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:54 crc kubenswrapper[4803]: I0127 23:06:54.792576 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-9crs2" podUID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 23:06:54 crc kubenswrapper[4803]: I0127 23:06:54.793341 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-9crs2" podUID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.027985 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-hg2h2" podUID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:06:55 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:06:55 crc kubenswrapper[4803]: >
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.028062 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-9nds5" podUID="f28d4382-79f1-4254-a4fa-fced45178594" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:06:55 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:06:55 crc kubenswrapper[4803]: >
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.028061 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-hg2h2" podUID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:06:55 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:06:55 crc kubenswrapper[4803]: >
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.028110 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-9nds5" podUID="f28d4382-79f1-4254-a4fa-fced45178594" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:06:55 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:06:55 crc kubenswrapper[4803]: >
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.028166 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-cwt95" podUID="1088c904-bd11-410d-963b-91425f9e2ee1" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:06:55 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:06:55 crc kubenswrapper[4803]: >
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.028166 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-cwt95" podUID="1088c904-bd11-410d-963b-91425f9e2ee1" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:06:55 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:06:55 crc kubenswrapper[4803]: >
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.028704 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-74nng" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:06:55 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:06:55 crc kubenswrapper[4803]: >
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.241571 4803 patch_prober.go:28] interesting pod/monitoring-plugin-8d685d9cc-c64j5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.241722 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" podUID="354a68b0-46f4-4cae-afbe-c5ef5fba4bdf" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.252924 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded" start-of-body=
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.252984 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.283085 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" podUID="2beb4659-d63e-495f-a32f-f94cbcbbc1ce" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.283153 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.283592 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
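The marketplace registry pods fail with a different, multi-line output: timeout: failed to connect service ":50051" within 1s. That message shape is characteristic of a grpc_health_probe-style exec probe checking the catalog's gRPC port; attributing it to that tool is an inference from the text, not something the log states. A rough analogue using the standard gRPC health-checking protocol, with the address and 1-second budget taken from the log and everything else assumed:

// Sketch: a gRPC health check with a 1s connect budget, mirroring the
// "failed to connect service ... within 1s" failure mode above.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	const addr = ":50051" // the registry-server gRPC port from the log
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	// Block until connected or the deadline expires; a blocked or overloaded
	// server exhausts the 1s budget right here.
	conn, err := grpc.DialContext(ctx, addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s\n", addr)
		os.Exit(1)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("health check failed:", err)
		os.Exit(1)
	}
	fmt.Println("status:", resp.GetStatus())
}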
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.310439 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="bfd832f4-d1c8-4283-b3cb-55cd225022e4" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.14:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.311406 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="bfd832f4-d1c8-4283-b3cb-55cd225022e4" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.14:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.481066 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" podUID="5bedb1c3-9c5a-4137-851d-33b1723a3221" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.481158 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" podUID="5bedb1c3-9c5a-4137-851d-33b1723a3221" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.533030 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.533087 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.533138 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.533092 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.562324 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.562300 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.562388 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.562424 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.638052 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" podUID="038e0b5a-3e3b-462b-83ca-c9865b6f4240" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.638058 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" podUID="038e0b5a-3e3b-462b-83ca-c9865b6f4240" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.786758 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="6c78b382-5735-4741-b087-cefda68053f4" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.786778 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="6c78b382-5735-4741-b087-cefda68053f4" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.875059 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" podUID="62a498d3-45eb-4117-ba22-041e8d90762d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:55 crc kubenswrapper[4803]: I0127 23:06:55.875259 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" podUID="62a498d3-45eb-4117-ba22-041e8d90762d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:56 crc kubenswrapper[4803]: I0127 23:06:56.103835 4803 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:56 crc kubenswrapper[4803]: I0127 23:06:56.105403 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:56 crc kubenswrapper[4803]: I0127 23:06:56.456147 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" podUID="ceff729d-b83b-45b4-99ef-d11ef9570efb" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:56 crc kubenswrapper[4803]: I0127 23:06:56.456166 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" podUID="ceff729d-b83b-45b4-99ef-d11ef9570efb" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:56 crc kubenswrapper[4803]: I0127 23:06:56.905429 4803 patch_prober.go:28] interesting pod/controller-manager-7df488d7f-9qs98 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:56 crc kubenswrapper[4803]: I0127 23:06:56.905502 4803 patch_prober.go:28] interesting pod/controller-manager-7df488d7f-9qs98 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:56 crc kubenswrapper[4803]: I0127 23:06:56.905575 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:56 crc kubenswrapper[4803]: I0127 23:06:56.905498 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.068087 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.068964 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.069153 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.160374 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-2nc8h" podUID="802fd9e5-a4c1-4195-b95a-e8fde55cbe1c" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.98:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.160680 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-2nc8h" podUID="802fd9e5-a4c1-4195-b95a-e8fde55cbe1c" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.98:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.715124 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="f9122f89-a56c-47d7-ad05-9aab6acdcc2f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.717405 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="f9122f89-a56c-47d7-ad05-9aab6acdcc2f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.769139 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlj5d container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.769227 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.786067 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4493a984-e728-410f-9362-0795391f2793" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.787534 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="4493a984-e728-410f-9362-0795391f2793" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.788182 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wrzxs" podUID="89a353b4-798b-4f55-91ff-316a9840a7bb" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.790112 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.810533 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlj5d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.810837 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.811454 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.820419 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"ec91d42bd8a135d0c614d6ed97e86acfb3222e35f87ebe79744ce38bff5ca16a"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.822041 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-central-agent" containerID="cri-o://ec91d42bd8a135d0c614d6ed97e86acfb3222e35f87ebe79744ce38bff5ca16a" gracePeriod=30
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.915513 4803 patch_prober.go:28] interesting pod/oauth-openshift-769fc69b77-cp7hp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.915586 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" podUID="3446baa2-c061-41ff-9652-16734b5bb97a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.915588 4803 patch_prober.go:28] interesting pod/oauth-openshift-769fc69b77-cp7hp container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.915700 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" podUID="3446baa2-c061-41ff-9652-16734b5bb97a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:57 crc kubenswrapper[4803]: I0127 23:06:57.987148 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-p9fmz" podUID="669fa453-18c2-4202-9ac3-117b6f000063" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.029042 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-p9fmz" podUID="669fa453-18c2-4202-9ac3-117b6f000063" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.531983 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.532277 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.532319 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.532345 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.532356 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg"
Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.532390 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg"
pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.646672 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"cb4ba389c387b989d42589e012b26e5087e092983e020a588397aa541d65796f"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.646735 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" containerID="cri-o://cb4ba389c387b989d42589e012b26e5087e092983e020a588397aa541d65796f" gracePeriod=30 Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.773045 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" podUID="e9d93e19-7c2b-4d53-bfe8-7b0157dec931" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.773084 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" podUID="e9d93e19-7c2b-4d53-bfe8-7b0157dec931" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.788528 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.788539 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 23:06:58 crc kubenswrapper[4803]: I0127 23:06:58.788666 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.178253 4803 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.178314 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.541290 4803 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-zr5dw container/loki-distributor namespace/openshift-logging: Readiness probe 
status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.541581 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" podUID="dea15eec-6442-4acb-b40a-418dddb46623" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.652703 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.652777 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.768162 4803 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-q4xmw container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.768238 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" podUID="1e455314-8336-4d0e-a611-044952db08e7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.787468 4803 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-bs4dm container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.787581 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" podUID="0323234b-6aa2-41ea-bf58-a4b3924d6e4a" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.840038 4803 patch_prober.go:28] interesting pod/downloads-7954f5f757-9drvm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:06:59 crc 
kubenswrapper[4803]: I0127 23:06:59.840092 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9drvm" podUID="1bc7c7ba-cad8-4f64-836e-a564b254e1fd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.882081 4803 patch_prober.go:28] interesting pod/downloads-7954f5f757-9drvm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.882143 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9drvm" podUID="1bc7c7ba-cad8-4f64-836e-a564b254e1fd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.882202 4803 patch_prober.go:28] interesting pod/console-operator-58897d9998-h9nvv container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.882290 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.882107 4803 patch_prober.go:28] interesting pod/console-operator-58897d9998-h9nvv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:06:59 crc kubenswrapper[4803]: I0127 23:06:59.882418 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.000546 4803 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-kdr8w container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.000608 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" podUID="8f8b8ad1-f276-4546-afd2-49f338f38c92" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.077373 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.077418 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.077428 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.077469 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.123047 4803 patch_prober.go:28] interesting pod/loki-operator-controller-manager-b65d5f66c-f2bd5 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.123112 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" podUID="51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.253622 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.253681 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.253689 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.253743 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.351079 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.351349 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.351612 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.351682 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.351908 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.351971 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.352059 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.352082 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.358956 4803 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.359003 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.358956 4803 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.359109 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.451044 4803 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-d65kn container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.451119 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" podUID="ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.451187 4803 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-d65kn container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.451236 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" podUID="ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9" 
containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.540907 4803 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-zr5dw container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.540962 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" podUID="dea15eec-6442-4acb-b40a-418dddb46623" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.562914 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.563037 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.602504 4803 patch_prober.go:28] interesting pod/console-98b9df85f-f5gmm container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.140:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.602590 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-98b9df85f-f5gmm" podUID="fa470512-29ae-4707-abdb-a93dd93f6b58" containerName="console" probeResult="failure" output="Get \"https://10.217.0.140:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.759937 4803 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.760042 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="564d57a3-4f2a-46a9-928b-b77dc685d903" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.768196 4803 
patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-q4xmw container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.768393 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" podUID="1e455314-8336-4d0e-a611-044952db08e7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.771498 4803 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.76:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.771586 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="a4c26ad1-a645-4746-9c19-c7bbda04000c" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.76:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.786709 4803 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-bs4dm container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.786763 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" podUID="0323234b-6aa2-41ea-bf58-a4b3924d6e4a" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.790210 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-tp8d4" podUID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.791270 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-tp8d4" podUID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 23:07:00.825902 4803 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.81:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:00 crc kubenswrapper[4803]: I0127 
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.016084 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" podUID="021b5278-1b81-43b3-ae44-ec231fb77687" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.46:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.253111 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.253177 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.421886 4803 trace.go:236] Trace[479626193]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (27-Jan-2026 23:06:57.722) (total time: 3696ms):
Jan 27 23:07:01 crc kubenswrapper[4803]: Trace[479626193]: [3.696067582s] [3.696067582s] END
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.421886 4803 trace.go:236] Trace[2097368272]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (27-Jan-2026 23:06:55.058) (total time: 6360ms):
Jan 27 23:07:01 crc kubenswrapper[4803]: Trace[2097368272]: [6.360001955s] [6.360001955s] END
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.421888 4803 trace.go:236] Trace[6644808]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (27-Jan-2026 23:06:55.823) (total time: 5594ms):
Jan 27 23:07:01 crc kubenswrapper[4803]: Trace[6644808]: [5.594806928s] [5.594806928s] END
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.421888 4803 trace.go:236] Trace[1705875567]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (27-Jan-2026 23:06:58.319) (total time: 3099ms):
Jan 27 23:07:01 crc kubenswrapper[4803]: Trace[1705875567]: [3.099170433s] [3.099170433s] END
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.421887 4803 trace.go:236] Trace[1060577470]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (27-Jan-2026 23:06:54.429) (total time: 6989ms):
Jan 27 23:07:01 crc kubenswrapper[4803]: Trace[1060577470]: [6.989365841s] [6.989365841s] END
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.496768 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg"
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.562948 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.58:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.563009 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.760017 4803 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.760405 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-ingester-0" podUID="564d57a3-4f2a-46a9-928b-b77dc685d903" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.771161 4803 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.76:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.771204 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-compactor-0" podUID="a4c26ad1-a645-4746-9c19-c7bbda04000c" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.76:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.788308 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.788960 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out"
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.825826 4803 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.81:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.825946 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="6efa3b11-b2ea-4f6d-87d2-177229718026" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.81:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.877922 4803 patch_prober.go:28] interesting pod/route-controller-manager-c4b5fc665-k52v8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.878029 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podUID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.879424 4803 patch_prober.go:28] interesting pod/route-controller-manager-c4b5fc665-k52v8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:01 crc kubenswrapper[4803]: I0127 23:07:01.879493 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podUID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.027864 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.028300 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.027892 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.028503 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.077770 4803 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-bqlpm container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.90:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.077827 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" podUID="77dd058d-f38b-4382-923d-f68fbb3c9566" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.90:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.312178 4803 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.312246 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.715200 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="f9122f89-a56c-47d7-ad05-9aab6acdcc2f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.715225 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="f9122f89-a56c-47d7-ad05-9aab6acdcc2f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.786318 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wrzxs" podUID="89a353b4-798b-4f55-91ff-316a9840a7bb" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.787666 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.908091 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" podUID="eac7ef2c-904d-429b-ac3f-a43a72339fde" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:02 crc kubenswrapper[4803]: I0127 23:07:02.908507 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" podUID="eac7ef2c-904d-429b-ac3f-a43a72339fde" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.072036 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" podUID="51221b4b-024e-4134-8baa-a9478c8c596a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.072415 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" podUID="c6f78887-1cda-463f-ab3f-57703bfb7a41" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.154056 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" podUID="c6f78887-1cda-463f-ab3f-57703bfb7a41" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.154094 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" podUID="f8498dfc-1b67-4783-9389-10d5b30b2860" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.154188 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" podUID="51221b4b-024e-4134-8baa-a9478c8c596a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.178342 4803 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-qn26k container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.62:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.178401 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" podUID="c5087ca2-7fa8-4a3e-b1bb-25335a4ed927" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.62:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.178647 4803 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-qn26k 
container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.62:5000/healthz\": context deadline exceeded" start-of-body= Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.178684 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" podUID="c5087ca2-7fa8-4a3e-b1bb-25335a4ed927" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.62:5000/healthz\": context deadline exceeded" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.238138 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" podUID="9c6792d4-9d18-4d1c-b855-65aba5ae4919" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.238234 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" podUID="f8498dfc-1b67-4783-9389-10d5b30b2860" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.320382 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" podUID="29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.416383 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" podUID="9c6792d4-9d18-4d1c-b855-65aba5ae4919" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.421031 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" podUID="29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.423402 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" podUID="47dce22a-001c-4774-ab99-28cd85420e1c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.567130 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" podUID="35783fb5-ef1c-4b33-beb1-af9fee8512d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.649247 4803 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" podUID="47dce22a-001c-4774-ab99-28cd85420e1c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.649279 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" podUID="662a79ef-9928-408c-8cfb-62945e0b6725" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.731367 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" podUID="c46ecfda-be7b-4f42-9874-a8a94f71188f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.731495 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" podUID="1f1cd413-71e0-443e-95cf-e5d46a745b1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.731529 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.813192 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" podUID="b6c89c2e-a080-4d20-bc81-bda0f9eb17b6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.813488 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" podUID="1f1cd413-71e0-443e-95cf-e5d46a745b1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.813728 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.853048 4803 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-th8dv container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.853335 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" podUID="7a6eb50d-a8af-4e53-a129-aee15ae61037" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.33:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.896011 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" podUID="7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.896022 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" podUID="35783fb5-ef1c-4b33-beb1-af9fee8512d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.977999 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" podUID="662a79ef-9928-408c-8cfb-62945e0b6725" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.978014 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.978154 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:03 crc kubenswrapper[4803]: I0127 23:07:03.978240 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.147030 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" podUID="c46ecfda-be7b-4f42-9874-a8a94f71188f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.229027 4803 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-nfxjq container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.229105 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" podUID="5b3c1908-cc42-4af3-a73d-916466d38dd6" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.312832 4803 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" podUID="b6c89c2e-a080-4d20-bc81-bda0f9eb17b6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.313387 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" podUID="eae71f44-8628-4436-be64-9ac3aa8f9255" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.397030 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" podUID="9dde9803-1302-4f0f-a353-1313e3696d7b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.397322 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" podUID="7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.397472 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.480044 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" podUID="7b65a167-f9c8-475c-be5b-39e0502352ab" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.480511 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.480556 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.562050 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.562102 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" 
podUID="57c28f35-52f1-48aa-ad74-3f66a5cdd52c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.562122 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" podUID="0592ab2d-4ade-4747-a823-73cd5dcac047" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.562193 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" podUID="0592ab2d-4ade-4747-a823-73cd5dcac047" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.562214 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" podUID="35742b16-a222-4602-ae0a-d078eafb1ea1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.562135 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.562268 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.562298 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.644028 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" podUID="e163066d-c764-49e0-9119-cbeb4f4fe50b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.644039 4803 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-nfxjq container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.37:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.644107 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" podUID="5b3c1908-cc42-4af3-a73d-916466d38dd6" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.644159 4803 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.644430 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" podUID="eae71f44-8628-4436-be64-9ac3aa8f9255" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.644512 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-7948f6cfb4-mpkbs" podUID="9dde9803-1302-4f0f-a353-1313e3696d7b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.644800 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" podUID="7b65a167-f9c8-475c-be5b-39e0502352ab" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.644868 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" podUID="57c28f35-52f1-48aa-ad74-3f66a5cdd52c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.645264 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" podUID="e163066d-c764-49e0-9119-cbeb4f4fe50b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.645320 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.645467 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" podUID="35742b16-a222-4602-ae0a-d078eafb1ea1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.747557 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"cd22b2e4ca8aa1dbc483fc088e5ece9d993383c7668255cf22bf0281a9f959a9"} pod="openshift-operators/observability-operator-59bdc8b94-skn2q" containerMessage="Container operator failed liveness probe, will be restarted" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.749365 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" 
containerID="cri-o://cd22b2e4ca8aa1dbc483fc088e5ece9d993383c7668255cf22bf0281a9f959a9" gracePeriod=30 Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.777046 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" podUID="c46ecfda-be7b-4f42-9874-a8a94f71188f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.784628 4803 patch_prober.go:28] interesting pod/metrics-server-5dc8cc774c-42hcg container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.784698 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" podUID="f978ff10-12ad-4883-98d9-7ce831fad147" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.784743 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.786763 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"01d358f5c285efb0d85a58dc84fe3ddf3c305b211f25861b4e7f911bf4fbca0f"} pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" containerMessage="Container metrics-server failed liveness probe, will be restarted" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.786801 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" podUID="f978ff10-12ad-4883-98d9-7ce831fad147" containerName="metrics-server" containerID="cri-o://01d358f5c285efb0d85a58dc84fe3ddf3c305b211f25861b4e7f911bf4fbca0f" gracePeriod=170 Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.791758 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-9crs2" podUID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.792820 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-9crs2" podUID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:04 crc kubenswrapper[4803]: I0127 23:07:04.856050 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" podUID="1f1cd413-71e0-443e-95cf-e5d46a745b1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.020092 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Readiness probe 
status=failure output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.020478 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.241833 4803 patch_prober.go:28] interesting pod/monitoring-plugin-8d685d9cc-c64j5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.242160 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" podUID="354a68b0-46f4-4cae-afbe-c5ef5fba4bdf" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.242278 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.254332 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.254414 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.283077 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" podUID="2beb4659-d63e-495f-a32f-f94cbcbbc1ce" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.283122 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.283202 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.283187 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" 
podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.310086 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="bfd832f4-d1c8-4283-b3cb-55cd225022e4" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.14:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.310140 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="bfd832f4-d1c8-4283-b3cb-55cd225022e4" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.14:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.440146 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" podUID="7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.481185 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" podUID="5bedb1c3-9c5a-4137-851d-33b1723a3221" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.481363 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.562480 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.562539 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.641261 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" podUID="038e0b5a-3e3b-462b-83ca-c9865b6f4240" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.641260 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.641390 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.641403 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.682232 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" podUID="35742b16-a222-4602-ae0a-d078eafb1ea1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.682260 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" podUID="038e0b5a-3e3b-462b-83ca-c9865b6f4240" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.682753 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.723067 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" podUID="e163066d-c764-49e0-9119-cbeb4f4fe50b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.756182 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"36addb28749ee510ca1933290c9ef068a58c6a9b2265b87526943933882b0385"} pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" containerMessage="Container webhook-server failed liveness probe, will be restarted" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.756232 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" podUID="038e0b5a-3e3b-462b-83ca-c9865b6f4240" containerName="webhook-server" containerID="cri-o://36addb28749ee510ca1933290c9ef068a58c6a9b2265b87526943933882b0385" gracePeriod=2 Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.759658 4803 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.759695 4803 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.765097 4803 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-nfxjq container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.37:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.765159 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" podUID="5b3c1908-cc42-4af3-a73d-916466d38dd6" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.786269 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="6c78b382-5735-4741-b087-cefda68053f4" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.786376 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.786269 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="6c78b382-5735-4741-b087-cefda68053f4" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.786669 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.788025 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.789391 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"3abfa89db2c69b77e3243b70fc7639be8d55df5685260f5eaf42b68c83d1de7f"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.793332 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-74nng" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.793412 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-hg2h2" podUID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.793517 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-hg2h2" podUID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:05 crc kubenswrapper[4803]: 
I0127 23:07:05.835033 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" podUID="62a498d3-45eb-4117-ba22-041e8d90762d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.835135 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 23:07:05 crc kubenswrapper[4803]: I0127 23:07:05.907652 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.063060 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.063128 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.104050 4803 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.104115 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.104220 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.114283 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.114440 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37" gracePeriod=30 Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.243636 4803 patch_prober.go:28] interesting pod/monitoring-plugin-8d685d9cc-c64j5 
container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.243694 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" podUID="354a68b0-46f4-4cae-afbe-c5ef5fba4bdf" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.325036 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" podUID="2beb4659-d63e-495f-a32f-f94cbcbbc1ce" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.453053 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" podUID="ceff729d-b83b-45b4-99ef-d11ef9570efb" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.453100 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" podUID="ceff729d-b83b-45b4-99ef-d11ef9570efb" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.453169 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.453236 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.454250 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"5b12992b803de9e1b315d60a241173e03758c3ba53973d8bdeeb283abbc8275a"} pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.454305 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" podUID="ceff729d-b83b-45b4-99ef-d11ef9570efb" containerName="frr-k8s-webhook-server" containerID="cri-o://5b12992b803de9e1b315d60a241173e03758c3ba53973d8bdeeb283abbc8275a" gracePeriod=10 Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.524333 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 23:07:06 crc kubenswrapper[4803]: 
I0127 23:07:06.524382 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.533398 4803 patch_prober.go:28] interesting pod/thanos-querier-7fd45b674-f8ngk container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.533466 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" podUID="f118d287-ae55-421d-9b9a-050b79b6692b" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.683110 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" podUID="038e0b5a-3e3b-462b-83ca-c9865b6f4240" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.795748 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-9nds5" podUID="f28d4382-79f1-4254-a4fa-fced45178594" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.796686 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-9nds5" podUID="f28d4382-79f1-4254-a4fa-fced45178594" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.877044 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" podUID="62a498d3-45eb-4117-ba22-041e8d90762d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.905989 4803 patch_prober.go:28] interesting pod/controller-manager-7df488d7f-9qs98 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.906069 4803 patch_prober.go:28] interesting pod/controller-manager-7df488d7f-9qs98 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.906134 4803 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.906180 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.913097 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"626bf7d2d2063c31dee0a7ff5af68e33526fb9a8872300b9a8c319817233a878"} pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" containerMessage="Container controller-manager failed liveness probe, will be restarted" Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.913151 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" containerID="cri-o://626bf7d2d2063c31dee0a7ff5af68e33526fb9a8872300b9a8c319817233a878" gracePeriod=30 Jan 27 23:07:06 crc kubenswrapper[4803]: I0127 23:07:06.919569 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.065114 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.065254 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.065554 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.066096 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-jsxr8" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.066213 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jsxr8" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.066231 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-jsxr8" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.068101 4803 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"66c47e501dec82dfdaca29b5e31eb6b0bc321e1ca7f4e54e92ff3c5ea0a160b2"} pod="metallb-system/frr-k8s-jsxr8" containerMessage="Container controller failed liveness probe, will be restarted" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.068166 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"6a74dd1430ece5cbf4721aa93949fb5fbf67b71d4900faa0b21496b2bacfd72e"} pod="metallb-system/frr-k8s-jsxr8" containerMessage="Container frr failed liveness probe, will be restarted" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.068295 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="controller" containerID="cri-o://66c47e501dec82dfdaca29b5e31eb6b0bc321e1ca7f4e54e92ff3c5ea0a160b2" gracePeriod=2 Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.157057 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-2nc8h" podUID="802fd9e5-a4c1-4195-b95a-e8fde55cbe1c" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.98:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.157349 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-2nc8h" podUID="802fd9e5-a4c1-4195-b95a-e8fde55cbe1c" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.98:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.496183 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" podUID="ceff729d-b83b-45b4-99ef-d11ef9570efb" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.715487 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="f9122f89-a56c-47d7-ad05-9aab6acdcc2f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.715596 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="f9122f89-a56c-47d7-ad05-9aab6acdcc2f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.715762 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.725058 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" podUID="038e0b5a-3e3b-462b-83ca-c9865b6f4240" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.789472 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4493a984-e728-410f-9362-0795391f2793" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.789599 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.789970 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="4493a984-e728-410f-9362-0795391f2793" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.790012 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.792173 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-cwt95" podUID="1088c904-bd11-410d-963b-91425f9e2ee1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.792235 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-5ch2x" podUID="302d32b5-3246-4bbc-877e-700ecd30afbd" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.792318 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xfps2" podUID="3f1dc5cb-1275-4cf9-8c71-f9575161f73f" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.792384 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-5ch2x" podUID="302d32b5-3246-4bbc-877e-700ecd30afbd" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.792451 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-xfps2" podUID="3f1dc5cb-1275-4cf9-8c71-f9575161f73f" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.792542 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-5ch2x" podUID="302d32b5-3246-4bbc-877e-700ecd30afbd" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.796020 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-cwt95" podUID="1088c904-bd11-410d-963b-91425f9e2ee1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.796096 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-5ch2x" podUID="302d32b5-3246-4bbc-877e-700ecd30afbd" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.802842 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"b377002717e410ad179d88d9b643c5b6f14ddaabc67985dc331b619f08ea2116"} pod="openstack/openstack-cell1-galera-0" 
containerMessage="Container galera failed liveness probe, will be restarted" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.811228 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlj5d container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.811282 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.811348 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.812089 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlj5d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.812190 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.812300 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.822235 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"90a62bdcb5a552347091f5153ec67d950d4493f9a3ac98b4bdc9806515e06dbf"} pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.822303 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" containerID="cri-o://90a62bdcb5a552347091f5153ec67d950d4493f9a3ac98b4bdc9806515e06dbf" gracePeriod=30 Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.906999 4803 patch_prober.go:28] interesting pod/oauth-openshift-769fc69b77-cp7hp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.907099 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" podUID="3446baa2-c061-41ff-9652-16734b5bb97a" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.907004 4803 patch_prober.go:28] interesting pod/oauth-openshift-769fc69b77-cp7hp container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.907266 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.907295 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" podUID="3446baa2-c061-41ff-9652-16734b5bb97a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.907425 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" Jan 27 23:07:07 crc kubenswrapper[4803]: I0127 23:07:07.917573 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"2858e5cb08be19324a1c5c32c6c51bfafa2bf9f9357bbbe587d92af80f4560ee"} pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.027025 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-p9fmz" podUID="669fa453-18c2-4202-9ac3-117b6f000063" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.027023 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-p9fmz" podUID="669fa453-18c2-4202-9ac3-117b6f000063" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.108168 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.178248 4803 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.178315 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" 
probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.178394 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.732054 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" podUID="e9d93e19-7c2b-4d53-bfe8-7b0157dec931" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.732173 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.785993 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4493a984-e728-410f-9362-0795391f2793" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.787041 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.787997 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.807206 4803 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=failed to establish etcd client: giving up getting a cached client after 3 tries Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.807619 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.808758 4803 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=failed to establish etcd client: giving up getting a cached client after 3 tries Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.808810 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.971197 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="69038d7c-7d07-4b92-a041-c27addfb7fba" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.213:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.971240 4803 patch_prober.go:28] interesting pod/oauth-openshift-769fc69b77-cp7hp 
container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.971388 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" podUID="3446baa2-c061-41ff-9652-16734b5bb97a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:08 crc kubenswrapper[4803]: I0127 23:07:08.973265 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="69038d7c-7d07-4b92-a041-c27addfb7fba" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.213:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.524046 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.524290 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.524365 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.541808 4803 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-zr5dw container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.541935 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" podUID="dea15eec-6442-4acb-b40a-418dddb46623" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.542037 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.603031 4803 patch_prober.go:28] interesting pod/console-98b9df85f-f5gmm container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.140:8443/health\": context deadline exceeded" start-of-body= Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.603096 4803 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/console-98b9df85f-f5gmm" podUID="fa470512-29ae-4707-abdb-a93dd93f6b58" containerName="console" probeResult="failure" output="Get \"https://10.217.0.140:8443/health\": context deadline exceeded" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.603149 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.604112 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"a2a44aa47f06462db5296bc332114eb143798cd5cc78761f3d8ca741e57e2138"} pod="openshift-console/console-98b9df85f-f5gmm" containerMessage="Container console failed liveness probe, will be restarted" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.636522 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-jsxr8" podUID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerName="frr" containerID="cri-o://6a74dd1430ece5cbf4721aa93949fb5fbf67b71d4900faa0b21496b2bacfd72e" gracePeriod=2 Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.775134 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck" podUID="e9d93e19-7c2b-4d53-bfe8-7b0157dec931" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.775219 4803 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-q4xmw container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.775276 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" podUID="1e455314-8336-4d0e-a611-044952db08e7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.775359 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.786133 4803 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-bs4dm container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.786176 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" podUID="0323234b-6aa2-41ea-bf58-a4b3924d6e4a" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.786252 4803 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.842116 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerDied","Data":"66c47e501dec82dfdaca29b5e31eb6b0bc321e1ca7f4e54e92ff3c5ea0a160b2"} Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.842348 4803 patch_prober.go:28] interesting pod/downloads-7954f5f757-9drvm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.842735 4803 generic.go:334] "Generic (PLEG): container finished" podID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerID="66c47e501dec82dfdaca29b5e31eb6b0bc321e1ca7f4e54e92ff3c5ea0a160b2" exitCode=137 Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.842414 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9drvm" podUID="1bc7c7ba-cad8-4f64-836e-a564b254e1fd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.847291 4803 generic.go:334] "Generic (PLEG): container finished" podID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerID="ec91d42bd8a135d0c614d6ed97e86acfb3222e35f87ebe79744ce38bff5ca16a" exitCode=0 Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.847614 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerDied","Data":"ec91d42bd8a135d0c614d6ed97e86acfb3222e35f87ebe79744ce38bff5ca16a"} Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.850692 4803 generic.go:334] "Generic (PLEG): container finished" podID="038e0b5a-3e3b-462b-83ca-c9865b6f4240" containerID="36addb28749ee510ca1933290c9ef068a58c6a9b2265b87526943933882b0385" exitCode=137 Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.850735 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" event={"ID":"038e0b5a-3e3b-462b-83ca-c9865b6f4240","Type":"ContainerDied","Data":"36addb28749ee510ca1933290c9ef068a58c6a9b2265b87526943933882b0385"} Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.883093 4803 patch_prober.go:28] interesting pod/downloads-7954f5f757-9drvm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.883155 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9drvm" podUID="1bc7c7ba-cad8-4f64-836e-a564b254e1fd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.883240 4803 patch_prober.go:28] interesting pod/console-operator-58897d9998-h9nvv container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get 
\"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.883257 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.883285 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.883587 4803 patch_prober.go:28] interesting pod/console-operator-58897d9998-h9nvv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.883703 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.883796 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.884701 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"5367198217cf89b564a5d7acd73e27cda5aee9cbcb8a6cf53aa9e5a1f104c01b"} pod="openshift-console-operator/console-operator-58897d9998-h9nvv" containerMessage="Container console-operator failed liveness probe, will be restarted" Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.884734 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" containerID="cri-o://5367198217cf89b564a5d7acd73e27cda5aee9cbcb8a6cf53aa9e5a1f104c01b" gracePeriod=30 Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.998054 4803 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-kdr8w container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.998118 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" podUID="8f8b8ad1-f276-4546-afd2-49f338f38c92" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 
27 23:07:09 crc kubenswrapper[4803]: I0127 23:07:09.998169 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.012300 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"9acc22af19da35d55623b7276ae9fb7cc66d521319e7465ba5a43273849c52e6"} pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.012365 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" podUID="8f8b8ad1-f276-4546-afd2-49f338f38c92" containerName="authentication-operator" containerID="cri-o://9acc22af19da35d55623b7276ae9fb7cc66d521319e7465ba5a43273849c52e6" gracePeriod=30 Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.076973 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.077036 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.077083 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.077534 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.077563 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.077607 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.079627 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"6f5f6fac5c801bc3a3a53cce68a6e7540e4368954867442ecf96df6c74334241"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" 
containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.079675 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" containerID="cri-o://6f5f6fac5c801bc3a3a53cce68a6e7540e4368954867442ecf96df6c74334241" gracePeriod=30 Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.165012 4803 patch_prober.go:28] interesting pod/loki-operator-controller-manager-b65d5f66c-f2bd5 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.165019 4803 patch_prober.go:28] interesting pod/loki-operator-controller-manager-b65d5f66c-f2bd5 container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.165178 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" podUID="51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.165086 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" podUID="51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.165350 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.253912 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.253961 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.253996 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.254018 4803 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.350109 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.350162 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.350202 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.352172 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.352228 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.352297 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mgtlh" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.352414 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.352437 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.352457 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.352674 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.352709 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.352759 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.353735 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"feb93123fdc07e61b239c351f46ffeaa730a6aba9dab848ab0ad1892932af44d"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.353785 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" containerID="cri-o://feb93123fdc07e61b239c351f46ffeaa730a6aba9dab848ab0ad1892932af44d" gracePeriod=30 Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.355758 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"21f73c45e2f9012a699b50af081501f3fc1d57615e96de8b16ffb2f2ceadddf4"} pod="openshift-ingress/router-default-5444994796-mgtlh" containerMessage="Container router failed liveness probe, will be restarted" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.355837 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" containerID="cri-o://21f73c45e2f9012a699b50af081501f3fc1d57615e96de8b16ffb2f2ceadddf4" gracePeriod=10 Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.359650 4803 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.359715 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.359770 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.359669 4803 patch_prober.go:28] interesting 
pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.360287 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.360357 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.361481 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"a91190553c095a5a655cedcad893b5277d0342e2e628b51828b0ad56d7f737bc"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" containerMessage="Container olm-operator failed liveness probe, will be restarted" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.361518 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" containerID="cri-o://a91190553c095a5a655cedcad893b5277d0342e2e628b51828b0ad56d7f737bc" gracePeriod=30 Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.452109 4803 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-d65kn container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.452192 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" podUID="ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.452245 4803 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-d65kn container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.452293 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.452287 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" podUID="ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9" containerName="package-server-manager" probeResult="failure" output="Get 
\"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.452426 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.453633 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"990059e329695155ccc7ee8c252f9851bb40f482108c08e5c41d86dfd124c808"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.453680 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" podUID="ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9" containerName="package-server-manager" containerID="cri-o://990059e329695155ccc7ee8c252f9851bb40f482108c08e5c41d86dfd124c808" gracePeriod=30 Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.543239 4803 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-zr5dw container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.543313 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw" podUID="dea15eec-6442-4acb-b40a-418dddb46623" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.561830 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.561858 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.561913 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.561968 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.604032 4803 patch_prober.go:28] interesting pod/console-98b9df85f-f5gmm container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.140:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.604420 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-98b9df85f-f5gmm" podUID="fa470512-29ae-4707-abdb-a93dd93f6b58" containerName="console" probeResult="failure" output="Get \"https://10.217.0.140:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.604518 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-98b9df85f-f5gmm"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.695417 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-b65d5f66c-f2bd5"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.716450 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="f9122f89-a56c-47d7-ad05-9aab6acdcc2f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.760057 4803 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.760143 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="564d57a3-4f2a-46a9-928b-b77dc685d903" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.771328 4803 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.76:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.771386 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="a4c26ad1-a645-4746-9c19-c7bbda04000c" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.76:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.775839 4803 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-q4xmw container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.775991 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw" podUID="1e455314-8336-4d0e-a611-044952db08e7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.787253 4803 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-bs4dm container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.787318 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm" podUID="0323234b-6aa2-41ea-bf58-a4b3924d6e4a" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.790117 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-tp8d4" podUID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.790224 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-tp8d4"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.791198 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-tp8d4" podUID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.791277 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-tp8d4"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.791597 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17"} pod="openstack-operators/openstack-operator-index-tp8d4" containerMessage="Container registry-server failed liveness probe, will be restarted"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.791645 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-tp8d4" podUID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerName="registry-server" containerID="cri-o://81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17" gracePeriod=30
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.826095 4803 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.81:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.826246 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="6efa3b11-b2ea-4f6d-87d2-177229718026" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.81:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:10 crc kubenswrapper[4803]: E0127 23:07:10.828618 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17" cmd=["grpc_health_probe","-addr=:50051"]
Jan 27 23:07:10 crc kubenswrapper[4803]: E0127 23:07:10.832119 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17" cmd=["grpc_health_probe","-addr=:50051"]
Jan 27 23:07:10 crc kubenswrapper[4803]: E0127 23:07:10.834396 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17" cmd=["grpc_health_probe","-addr=:50051"]
Jan 27 23:07:10 crc kubenswrapper[4803]: E0127 23:07:10.834465 4803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-tp8d4" podUID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerName="registry-server"
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.865620 4803 generic.go:334] "Generic (PLEG): container finished" podID="69126409-4642-4d42-855d-e7325b3de7c5" containerID="cd22b2e4ca8aa1dbc483fc088e5ece9d993383c7668255cf22bf0281a9f959a9" exitCode=0
Jan 27 23:07:10 crc kubenswrapper[4803]: I0127 23:07:10.865704 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" event={"ID":"69126409-4642-4d42-855d-e7325b3de7c5","Type":"ContainerDied","Data":"cd22b2e4ca8aa1dbc483fc088e5ece9d993383c7668255cf22bf0281a9f959a9"}
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.017088 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-99277" podUID="021b5278-1b81-43b3-ae44-ec231fb77687" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.46:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.021244 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.494026 4803 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-d65kn container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
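[Note: the registry-server pod uses an exec probe, grpc_health_probe -addr=:50051, which performs a gRPC Health/Check RPC inside the container. The "ExecSync cmd from runtime service failed ... container is stopping" errors above occur because the container is already being torn down by the grace-period kill, so CRI-O refuses to register a new exec PID for in-flight probes. A sketch of the same check done directly in Go follows; it assumes the google.golang.org/grpc module is available, and the address is taken from the probe command in the log.]

```go
// Sketch of the check grpc_health_probe performs: a Health/Check RPC against
// the gRPC health service. Assumes google.golang.org/grpc; address from the log.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	defer conn.Close()
	// An empty Service name asks about the overall server health.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	fmt.Println("serving status:", resp.GetStatus()) // SERVING on success
}
```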
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.494609 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" podUID="ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.534392 4803 patch_prober.go:28] interesting pod/thanos-querier-7fd45b674-f8ngk container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.534506 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-7fd45b674-f8ngk" podUID="f118d287-ae55-421d-9b9a-050b79b6692b" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.877395 4803 patch_prober.go:28] interesting pod/route-controller-manager-c4b5fc665-k52v8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.877697 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podUID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.877413 4803 patch_prober.go:28] interesting pod/route-controller-manager-c4b5fc665-k52v8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.877743 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8"
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.877793 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podUID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.879596 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"adf436f517d444c036e20dd4e0eb30efbe4e95022d94c8064e2f9cbfeeb56f1b"} pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" containerMessage="Container route-controller-manager failed liveness probe, will be restarted"
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.879668 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podUID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerName="route-controller-manager" containerID="cri-o://adf436f517d444c036e20dd4e0eb30efbe4e95022d94c8064e2f9cbfeeb56f1b" gracePeriod=30
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.920040 4803 generic.go:334] "Generic (PLEG): container finished" podID="0f079c02-e2f3-4dc3-aad2-86c70d3d41e8" containerID="6a74dd1430ece5cbf4721aa93949fb5fbf67b71d4900faa0b21496b2bacfd72e" exitCode=143
Jan 27 23:07:11 crc kubenswrapper[4803]: I0127 23:07:11.920098 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerDied","Data":"6a74dd1430ece5cbf4721aa93949fb5fbf67b71d4900faa0b21496b2bacfd72e"}
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.028451 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.028511 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.028590 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.028607 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.028666 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.028692 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.030095 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"68a1735763950ee03fe69618654ce8e6975629d83b38f0c28c49523e11400654"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.030138 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" containerID="cri-o://68a1735763950ee03fe69618654ce8e6975629d83b38f0c28c49523e11400654" gracePeriod=30
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.078677 4803 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-bqlpm container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.90:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.078742 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" podUID="77dd058d-f38b-4382-923d-f68fbb3c9566" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.90:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.078831 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.354047 4803 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.354562 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.524321 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.524383 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.545586 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": dial tcp 10.217.0.11:8081: connect: connection refused" start-of-body=
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.545639 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": dial tcp 10.217.0.11:8081: connect: connection refused"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.790040 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wrzxs" podUID="89a353b4-798b-4f55-91ff-316a9840a7bb" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.866107 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" podUID="eac7ef2c-904d-429b-ac3f-a43a72339fde" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.866222 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.919150 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" podUID="c6f78887-1cda-463f-ab3f-57703bfb7a41" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.919297 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.956895 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-h9nvv_61adce3e-cfdd-4a33-b64d-f49069ef6469/console-operator/0.log"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.958058 4803 generic.go:334] "Generic (PLEG): container finished" podID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerID="5367198217cf89b564a5d7acd73e27cda5aee9cbcb8a6cf53aa9e5a1f104c01b" exitCode=1
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.958243 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" event={"ID":"61adce3e-cfdd-4a33-b64d-f49069ef6469","Type":"ContainerDied","Data":"5367198217cf89b564a5d7acd73e27cda5aee9cbcb8a6cf53aa9e5a1f104c01b"}
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.960393 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" podUID="51221b4b-024e-4134-8baa-a9478c8c596a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.961155 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk"
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.964316 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p" event={"ID":"038e0b5a-3e3b-462b-83ca-c9865b6f4240","Type":"ContainerStarted","Data":"ed44c4086da9da3b85e37de1f34d7127f00ced040b870cee65000b33a9dd6697"}
Jan 27 23:07:12 crc kubenswrapper[4803]: I0127 23:07:12.965137 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.001620 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" podUID="f8498dfc-1b67-4783-9389-10d5b30b2860" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.001746 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.028980 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.029054 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.120011 4803 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-bqlpm container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.90:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.120034 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" podUID="9c6792d4-9d18-4d1c-b855-65aba5ae4919" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.120073 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm" podUID="77dd058d-f38b-4382-923d-f68fbb3c9566" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.90:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.120167 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.162048 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" podUID="29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.162185 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.203155 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" podUID="47dce22a-001c-4774-ab99-28cd85420e1c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.203274 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.203326 4803 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-qn26k container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.62:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.203388 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" podUID="c5087ca2-7fa8-4a3e-b1bb-25335a4ed927" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.62:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.203430 4803 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-qn26k container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.62:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.203492 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-qn26k" podUID="c5087ca2-7fa8-4a3e-b1bb-25335a4ed927" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.62:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.206235 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.438069 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr" podUID="1f1cd413-71e0-443e-95cf-e5d46a745b1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.438102 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" podUID="662a79ef-9928-408c-8cfb-62945e0b6725" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.438677 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.438134 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-26gcs" podUID="35783fb5-ef1c-4b33-beb1-af9fee8512d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.480068 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn" podUID="c46ecfda-be7b-4f42-9874-a8a94f71188f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.521078 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" podUID="b6c89c2e-a080-4d20-bc81-bda0f9eb17b6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.521205 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.699038 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" podUID="0592ab2d-4ade-4747-a823-73cd5dcac047" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.699169 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.741033 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl" podUID="35742b16-a222-4602-ae0a-d078eafb1ea1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.783014 4803 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-nfxjq container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.37:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.783081 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq" podUID="5b3c1908-cc42-4af3-a73d-916466d38dd6" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.790692 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.790805 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.790831 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.796795 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus" containerStatusID={"Type":"cri-o","ID":"b38a7e1bde06d99eb8a70c9e615c871d61b42fb709378ee424f8e73868221c9c"} pod="openshift-monitoring/prometheus-k8s-0" containerMessage="Container prometheus failed liveness probe, will be restarted"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.797269 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" containerID="cri-o://b38a7e1bde06d99eb8a70c9e615c871d61b42fb709378ee424f8e73868221c9c" gracePeriod=600
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.826270 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" podUID="eae71f44-8628-4436-be64-9ac3aa8f9255" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.826406 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.854260 4803 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-th8dv container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.854311 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-th8dv" podUID="7a6eb50d-a8af-4e53-a129-aee15ae61037" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.33:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.897125 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" podUID="7b65a167-f9c8-475c-be5b-39e0502352ab" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.897267 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.939389 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd" podUID="eac7ef2c-904d-429b-ac3f-a43a72339fde" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.981988 4803 generic.go:334] "Generic (PLEG): container finished" podID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerID="feb93123fdc07e61b239c351f46ffeaa730a6aba9dab848ab0ad1892932af44d" exitCode=0
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.982058 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" event={"ID":"767d334b-3f70-4847-b45a-ccf0d7e2dc2b","Type":"ContainerDied","Data":"feb93123fdc07e61b239c351f46ffeaa730a6aba9dab848ab0ad1892932af44d"}
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.988106 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerStarted","Data":"b30cc24cd8eabf1112dea7ca32c85b17e45d6ff38e2c09af245838396b131565"}
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.991730 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" event={"ID":"69126409-4642-4d42-855d-e7325b3de7c5","Type":"ContainerStarted","Data":"da56f87e0542dcae8c006d4b59fc7fd3e3e3b6255feed98696face2a41603daa"}
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.992016 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-skn2q"
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.994176 4803 generic.go:334] "Generic (PLEG): container finished" podID="ceff729d-b83b-45b4-99ef-d11ef9570efb" containerID="5b12992b803de9e1b315d60a241173e03758c3ba53973d8bdeeb283abbc8275a" exitCode=0
Jan 27 23:07:13 crc kubenswrapper[4803]: I0127 23:07:13.994231 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" event={"ID":"ceff729d-b83b-45b4-99ef-d11ef9570efb","Type":"ContainerDied","Data":"5b12992b803de9e1b315d60a241173e03758c3ba53973d8bdeeb283abbc8275a"}
Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.022121 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7" podUID="c6f78887-1cda-463f-ab3f-57703bfb7a41" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.023013 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerStarted","Data":"ca6fc1b3c78bdc56818ba2db16c89210841de003525942495c15fff25ab3458e"}
23:07:14.063986 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" podUID="57c28f35-52f1-48aa-ad74-3f66a5cdd52c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.064061 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk" podUID="51221b4b-024e-4134-8baa-a9478c8c596a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.064154 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.105195 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" podUID="f8498dfc-1b67-4783-9389-10d5b30b2860" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.105292 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": dial tcp 10.217.0.11:8081: connect: connection refused" start-of-body= Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.105356 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": dial tcp 10.217.0.11:8081: connect: connection refused" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.105530 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hcwxh" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.105590 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-75cd85946-nk8z5" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.161065 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg" podUID="9c6792d4-9d18-4d1c-b855-65aba5ae4919" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.163986 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7" podUID="29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.200169 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h9xdv" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.246144 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6" podUID="47dce22a-001c-4774-ab99-28cd85420e1c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.410339 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gst8v" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.419183 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rzpc" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.603810 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" podUID="2beb4659-d63e-495f-a32f-f94cbcbbc1ce" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8080/readyz\": read tcp 10.217.0.2:48044->10.217.0.95:8080: read: connection reset by peer" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.792094 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-9crs2" podUID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.792535 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/certified-operators-9crs2" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.792799 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-9crs2" podUID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.792933 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9crs2" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.793546 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"1c84092e5a169af46263a90c73f579ab311ad67ffe76af8648b49a818e27a622"} pod="openshift-marketplace/certified-operators-9crs2" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.793588 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9crs2" podUID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerName="registry-server" containerID="cri-o://1c84092e5a169af46263a90c73f579ab311ad67ffe76af8648b49a818e27a622" gracePeriod=30 Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.827079 4803 patch_prober.go:28] interesting pod/metrics-server-5dc8cc774c-42hcg container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.827133 4803 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" podUID="f978ff10-12ad-4883-98d9-7ce831fad147" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:14 crc kubenswrapper[4803]: I0127 23:07:14.939063 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn" podUID="7b65a167-f9c8-475c-be5b-39e0502352ab" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.021550 4803 status_manager.go:875] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T23:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T23:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb4ba389c387b989d42589e012b26e5087e092983e020a588397aa541d65796f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T21:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/available-featuregates\\\",\\\"name\\\":\\\"available-featuregates\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl88p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-7777fb866f-stngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": context deadline exceeded" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.037377 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-h9nvv_61adce3e-cfdd-4a33-b64d-f49069ef6469/console-operator/0.log" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.037452 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" 
event={"ID":"61adce3e-cfdd-4a33-b64d-f49069ef6469","Type":"ContainerStarted","Data":"4aac0d06c4a9c4c4022602f2ab2e8531d360f57dbf10e1bd1b4d6c02166d0deb"} Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.038231 4803 patch_prober.go:28] interesting pod/console-operator-58897d9998-h9nvv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.038247 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.038284 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.040536 4803 generic.go:334] "Generic (PLEG): container finished" podID="2beb4659-d63e-495f-a32f-f94cbcbbc1ce" containerID="9beec0dcd921f5de25004b6333c4745beacfaa117e7da813df6887bdf043a19e" exitCode=1 Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.040675 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" event={"ID":"2beb4659-d63e-495f-a32f-f94cbcbbc1ce","Type":"ContainerDied","Data":"9beec0dcd921f5de25004b6333c4745beacfaa117e7da813df6887bdf043a19e"} Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.041641 4803 scope.go:117] "RemoveContainer" containerID="9beec0dcd921f5de25004b6333c4745beacfaa117e7da813df6887bdf043a19e" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.043674 4803 generic.go:334] "Generic (PLEG): container finished" podID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerID="6f5f6fac5c801bc3a3a53cce68a6e7540e4368954867442ecf96df6c74334241" exitCode=0 Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.043737 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" event={"ID":"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6","Type":"ContainerDied","Data":"6f5f6fac5c801bc3a3a53cce68a6e7540e4368954867442ecf96df6c74334241"} Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.045794 4803 generic.go:334] "Generic (PLEG): container finished" podID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerID="68a1735763950ee03fe69618654ce8e6975629d83b38f0c28c49523e11400654" exitCode=0 Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.045882 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" event={"ID":"620f5cd9-d7ac-436d-8d1f-66617d4fe1a3","Type":"ContainerDied","Data":"68a1735763950ee03fe69618654ce8e6975629d83b38f0c28c49523e11400654"} Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.052081 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" event={"ID":"ceff729d-b83b-45b4-99ef-d11ef9570efb","Type":"ContainerStarted","Data":"9594dd55a8fdea1b705c25b59465ec7338d95baba6389d2861a03ab810032a77"} Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.052721 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.054824 4803 generic.go:334] "Generic (PLEG): container finished" podID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerID="a91190553c095a5a655cedcad893b5277d0342e2e628b51828b0ad56d7f737bc" exitCode=0 Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.054947 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" event={"ID":"25eb3de0-78b3-4e89-a860-9f1778060c50","Type":"ContainerDied","Data":"a91190553c095a5a655cedcad893b5277d0342e2e628b51828b0ad56d7f737bc"} Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.060787 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jsxr8" event={"ID":"0f079c02-e2f3-4dc3-aad2-86c70d3d41e8","Type":"ContainerStarted","Data":"7a9a3e416c17d96ac25e3637256381b0d8564aa50d7915212f1d1ff7d0f84010"} Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.060973 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jsxr8" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.065901 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" event={"ID":"767d334b-3f70-4847-b45a-ccf0d7e2dc2b","Type":"ContainerStarted","Data":"dfb8691a25b884c159474e92f63750b3ef7aeb365e24e76e9c2ae2b9132bc484"} Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.066459 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.106271 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" podUID="57c28f35-52f1-48aa-ad74-3f66a5cdd52c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.106712 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.106778 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.107092 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": dial tcp 10.217.0.11:8081: connect: connection refused" start-of-body= Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.107124 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": dial tcp 
10.217.0.11:8081: connect: connection refused" Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.236471 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.239943 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.242016 4803 patch_prober.go:28] interesting pod/monitoring-plugin-8d685d9cc-c64j5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.242054 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5" podUID="354a68b0-46f4-4cae-afbe-c5ef5fba4bdf" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.78:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.245471 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.245671 4803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-tp8d4" podUID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerName="registry-server" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.253244 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-shvtm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.253372 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-shvtm" podUID="bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.310856 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="bfd832f4-d1c8-4283-b3cb-55cd225022e4" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.14:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 
23:07:15.311225 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.312319 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"9aa9015b9af26e69bbd95056c17e5027d063ba9ba5d845de8c477ebe94994e43"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.312355 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bfd832f4-d1c8-4283-b3cb-55cd225022e4" containerName="kube-state-metrics" containerID="cri-o://9aa9015b9af26e69bbd95056c17e5027d063ba9ba5d845de8c477ebe94994e43" gracePeriod=30 Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.319747 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-64f565f6ff-2xjcl" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.323603 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" podUID="5bedb1c3-9c5a-4137-851d-33b1723a3221" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": EOF" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.323728 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" podUID="5bedb1c3-9c5a-4137-851d-33b1723a3221" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": EOF" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.404993 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" probeResult="failure" output=< Jan 27 23:07:15 crc kubenswrapper[4803]: % Total % Received % Xferd Average Speed Time Time Time Current Jan 27 23:07:15 crc kubenswrapper[4803]: Dload Upload Total Spent Left Speed Jan 27 23:07:15 crc kubenswrapper[4803]: [166B blob data] Jan 27 23:07:15 crc kubenswrapper[4803]: curl: (22) The requested URL returned error: 503 Jan 27 23:07:15 crc kubenswrapper[4803]: > Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.410900 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b38a7e1bde06d99eb8a70c9e615c871d61b42fb709378ee424f8e73868221c9c" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.417298 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b38a7e1bde06d99eb8a70c9e615c871d61b42fb709378ee424f8e73868221c9c" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; 
else exit 1; fi"] Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.420103 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b38a7e1bde06d99eb8a70c9e615c871d61b42fb709378ee424f8e73868221c9c" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.420163 4803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerName="prometheus" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.525054 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.525111 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.562329 4803 patch_prober.go:28] interesting pod/logging-loki-gateway-8597d8df56-dkqb6 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.562394 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-8597d8df56-dkqb6" podUID="806f03eb-fc44-4b50-953e-d4101abd8bc3" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.786290 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="6c78b382-5735-4741-b087-cefda68053f4" containerName="galera" probeResult="failure" output="command timed out" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.790510 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-hg2h2" podUID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.790722 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.791039 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-hg2h2" podUID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 
23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.791105 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.791143 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-74nng" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.792654 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"d374519d7ca3d13e35d08fadcf3fbdfacddd14cf09ffacc72c3799812099cd9f"} pod="openshift-marketplace/redhat-marketplace-hg2h2" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.792716 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hg2h2" podUID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerName="registry-server" containerID="cri-o://d374519d7ca3d13e35d08fadcf3fbdfacddd14cf09ffacc72c3799812099cd9f" gracePeriod=30 Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.795267 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d374519d7ca3d13e35d08fadcf3fbdfacddd14cf09ffacc72c3799812099cd9f" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.799513 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d374519d7ca3d13e35d08fadcf3fbdfacddd14cf09ffacc72c3799812099cd9f" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.843512 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d374519d7ca3d13e35d08fadcf3fbdfacddd14cf09ffacc72c3799812099cd9f" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 23:07:15 crc kubenswrapper[4803]: E0127 23:07:15.843581 4803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-hg2h2" podUID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerName="registry-server" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.908819 4803 patch_prober.go:28] interesting pod/controller-manager-7df488d7f-9qs98 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.908918 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" 
Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.943274 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jsxr8" Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.946940 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 23:07:15 crc kubenswrapper[4803]: I0127 23:07:15.995972 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jsxr8" Jan 27 23:07:16 crc kubenswrapper[4803]: E0127 23:07:16.035790 4803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf37cfcbc_f864_4f97_804e_b5ba5313c347.slice/crio-conmon-626bf7d2d2063c31dee0a7ff5af68e33526fb9a8872300b9a8c319817233a878.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb438c007_ef5f_4ed3_8f81_c5ac6d0209ac.slice/crio-81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b1c25f0_10e5_41a3_81ca_aef5372a4d38.slice/crio-conmon-90a62bdcb5a552347091f5153ec67d950d4493f9a3ac98b4bdc9806515e06dbf.scope\": RecentStats: unable to find data in memory cache]" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.085437 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" event={"ID":"2beb4659-d63e-495f-a32f-f94cbcbbc1ce","Type":"ContainerStarted","Data":"494bea731dad9472d79206045162543ac79ada8e9f9196d739cb4f6f1396ef93"} Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.087123 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.093353 4803 generic.go:334] "Generic (PLEG): container finished" podID="bfd832f4-d1c8-4283-b3cb-55cd225022e4" containerID="9aa9015b9af26e69bbd95056c17e5027d063ba9ba5d845de8c477ebe94994e43" exitCode=2 Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.093461 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bfd832f4-d1c8-4283-b3cb-55cd225022e4","Type":"ContainerDied","Data":"9aa9015b9af26e69bbd95056c17e5027d063ba9ba5d845de8c477ebe94994e43"} Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.105988 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" event={"ID":"25eb3de0-78b3-4e89-a860-9f1778060c50","Type":"ContainerStarted","Data":"9e1e3f5c8057c623e795424dcae86e2ba8661f4b1934bd3d1f3e68b952d86636"} Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.107613 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.107698 4803 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.107730 4803 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.114686 4803 generic.go:334] "Generic (PLEG): container finished" podID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerID="626bf7d2d2063c31dee0a7ff5af68e33526fb9a8872300b9a8c319817233a878" exitCode=0 Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.114797 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" event={"ID":"f37cfcbc-f864-4f97-804e-b5ba5313c347","Type":"ContainerDied","Data":"626bf7d2d2063c31dee0a7ff5af68e33526fb9a8872300b9a8c319817233a878"} Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.119511 4803 generic.go:334] "Generic (PLEG): container finished" podID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerID="90a62bdcb5a552347091f5153ec67d950d4493f9a3ac98b4bdc9806515e06dbf" exitCode=0 Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.119580 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" event={"ID":"2b1c25f0-10e5-41a3-81ca-aef5372a4d38","Type":"ContainerDied","Data":"90a62bdcb5a552347091f5153ec67d950d4493f9a3ac98b4bdc9806515e06dbf"} Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.123645 4803 generic.go:334] "Generic (PLEG): container finished" podID="b438c007-ef5f-4ed3-8f81-c5ac6d0209ac" containerID="81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17" exitCode=0 Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.123694 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tp8d4" event={"ID":"b438c007-ef5f-4ed3-8f81-c5ac6d0209ac","Type":"ContainerDied","Data":"81f96678d41555d10e6d056adfb222922fa0a293fd3f672b8f2579ead22e9b17"} Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.129148 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" event={"ID":"31c328be-cd7e-48a1-bb8d-086bbe5f1dd6","Type":"ContainerStarted","Data":"1c4c4eeb93cc7f4276d95df7ce13e643e7d4b57417c3f1bd71acf2ed969d44d3"} Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.130819 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.132277 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.132330 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.136285 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" 
event={"ID":"620f5cd9-d7ac-436d-8d1f-66617d4fe1a3","Type":"ContainerStarted","Data":"9701b53aa4cb6a326cbc415b44c9b5af4e7f04835b89c8f9249723583c8cf979"} Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.138199 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.138313 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.138361 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.140173 4803 generic.go:334] "Generic (PLEG): container finished" podID="5bedb1c3-9c5a-4137-851d-33b1723a3221" containerID="1943fa1831b28dcb16a3c0da317dd192683eff0cc2a63cd98c4b4b469583a041" exitCode=1 Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.140241 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" event={"ID":"5bedb1c3-9c5a-4137-851d-33b1723a3221","Type":"ContainerDied","Data":"1943fa1831b28dcb16a3c0da317dd192683eff0cc2a63cd98c4b4b469583a041"} Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.141080 4803 scope.go:117] "RemoveContainer" containerID="1943fa1831b28dcb16a3c0da317dd192683eff0cc2a63cd98c4b4b469583a041" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.156554 4803 generic.go:334] "Generic (PLEG): container finished" podID="8f8b8ad1-f276-4546-afd2-49f338f38c92" containerID="9acc22af19da35d55623b7276ae9fb7cc66d521319e7465ba5a43273849c52e6" exitCode=0 Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.156677 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" event={"ID":"8f8b8ad1-f276-4546-afd2-49f338f38c92","Type":"ContainerDied","Data":"9acc22af19da35d55623b7276ae9fb7cc66d521319e7465ba5a43273849c52e6"} Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.163160 4803 generic.go:334] "Generic (PLEG): container finished" podID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerID="adf436f517d444c036e20dd4e0eb30efbe4e95022d94c8064e2f9cbfeeb56f1b" exitCode=0 Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.164373 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" event={"ID":"7cd4933d-5334-4da7-8a38-e0f42c85bfbe","Type":"ContainerDied","Data":"adf436f517d444c036e20dd4e0eb30efbe4e95022d94c8064e2f9cbfeeb56f1b"} Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.168320 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 
27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.168355 4803 patch_prober.go:28] interesting pod/console-operator-58897d9998-h9nvv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.168370 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.168407 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.342887 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.342958 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.342998 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.344211 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e195e4590bf4eb00374d7f4aa7585484d9570421738b754585197e9eadc6e0e7"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.344271 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://e195e4590bf4eb00374d7f4aa7585484d9570421738b754585197e9eadc6e0e7" gracePeriod=600 Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.406466 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.727966 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlj5d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" start-of-body= Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.728255 4803 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.791344 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-9nds5" podUID="f28d4382-79f1-4254-a4fa-fced45178594" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.791661 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/community-operators-9nds5" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.794324 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"d06e6be93e46765a13fa6664692c7463799cde50407a37cfe737f3841cdd2b9c"} pod="openshift-marketplace/community-operators-9nds5" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 27 23:07:16 crc kubenswrapper[4803]: I0127 23:07:16.794380 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9nds5" podUID="f28d4382-79f1-4254-a4fa-fced45178594" containerName="registry-server" containerID="cri-o://d06e6be93e46765a13fa6664692c7463799cde50407a37cfe737f3841cdd2b9c" gracePeriod=30 Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.184348 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" event={"ID":"f37cfcbc-f864-4f97-804e-b5ba5313c347","Type":"ContainerStarted","Data":"53d8137ee0578e3acc5912f778119a6b0b0fb51ef4a89a85a44789bc94e67f07"} Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.185325 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.185583 4803 patch_prober.go:28] interesting pod/controller-manager-7df488d7f-9qs98 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.185625 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.189826 4803 generic.go:334] "Generic (PLEG): container finished" podID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerID="d374519d7ca3d13e35d08fadcf3fbdfacddd14cf09ffacc72c3799812099cd9f" exitCode=0 Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.189963 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg2h2" event={"ID":"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf","Type":"ContainerDied","Data":"d374519d7ca3d13e35d08fadcf3fbdfacddd14cf09ffacc72c3799812099cd9f"} Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.196600 4803 generic.go:334] "Generic 
(PLEG): container finished" podID="ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9" containerID="990059e329695155ccc7ee8c252f9851bb40f482108c08e5c41d86dfd124c808" exitCode=0 Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.196671 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" event={"ID":"ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9","Type":"ContainerDied","Data":"990059e329695155ccc7ee8c252f9851bb40f482108c08e5c41d86dfd124c808"} Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.196698 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" event={"ID":"ce9845c4-3cfb-4ef2-8d77-d1244fcc8ab9","Type":"ContainerStarted","Data":"a5d0269b32dfa298081746d70b84966bd6f038f818b979cb19e346476021733e"} Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.196821 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.203564 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="e195e4590bf4eb00374d7f4aa7585484d9570421738b754585197e9eadc6e0e7" exitCode=0 Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.203636 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"e195e4590bf4eb00374d7f4aa7585484d9570421738b754585197e9eadc6e0e7"} Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.203682 4803 scope.go:117] "RemoveContainer" containerID="78bbfda12420ffb8901798c8b0a0e391c88af6ffa70eb4f98595a8f819f28771" Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.211944 4803 generic.go:334] "Generic (PLEG): container finished" podID="7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79" containerID="3771eb7ec233067d01cce0bdf1337e910915fcd4804be553d6224ba1157c2425" exitCode=1 Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.212029 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" event={"ID":"7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79","Type":"ContainerDied","Data":"3771eb7ec233067d01cce0bdf1337e910915fcd4804be553d6224ba1157c2425"} Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.212791 4803 scope.go:117] "RemoveContainer" containerID="3771eb7ec233067d01cce0bdf1337e910915fcd4804be553d6224ba1157c2425" Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.229096 4803 generic.go:334] "Generic (PLEG): container finished" podID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerID="1c84092e5a169af46263a90c73f579ab311ad67ffe76af8648b49a818e27a622" exitCode=0 Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.229390 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9crs2" event={"ID":"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d","Type":"ContainerDied","Data":"1c84092e5a169af46263a90c73f579ab311ad67ffe76af8648b49a818e27a622"} Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.238147 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bfd832f4-d1c8-4283-b3cb-55cd225022e4","Type":"ContainerStarted","Data":"5c6df95d608f83e9c514c933f6e244e91a164c3c34b580ef17e0651a8922a126"} Jan 27 23:07:17 crc 
kubenswrapper[4803]: I0127 23:07:17.238507 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.245558 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tp8d4" event={"ID":"b438c007-ef5f-4ed3-8f81-c5ac6d0209ac","Type":"ContainerStarted","Data":"2f2b342d5460ed9b574dc7e1d8374ff402fc37084f9a16f13ad1c1b93f0d435c"}
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.249279 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kdr8w" event={"ID":"8f8b8ad1-f276-4546-afd2-49f338f38c92","Type":"ContainerStarted","Data":"2323e0697a69ba6b2995abb3b12f3c7412d3ce959d120ae80fe315ebdd4327af"}
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.273950 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlj5d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" start-of-body=
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.274766 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused"
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.276274 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" event={"ID":"2b1c25f0-10e5-41a3-81ca-aef5372a4d38","Type":"ContainerStarted","Data":"88f79c4c15428b0eaf0858fc7a99639cf716ad800ad793204ab9f5b12b4cdc0f"}
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.276380 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d"
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.289565 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" event={"ID":"7cd4933d-5334-4da7-8a38-e0f42c85bfbe","Type":"ContainerStarted","Data":"89e3cff46c8390935448705284a0ab48930d65e63fb714aa8c22673877d8630a"}
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.290304 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8"
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.290806 4803 patch_prober.go:28] interesting pod/route-controller-manager-c4b5fc665-k52v8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body=
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.290948 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podUID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused"
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.300455 4803 generic.go:334] "Generic (PLEG): container finished" podID="57c28f35-52f1-48aa-ad74-3f66a5cdd52c" containerID="16a1f903c50b3c403b22ec000b847b2519ad4ad6ce01753bfec751cebe9c9a6e" exitCode=1
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.300606 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" event={"ID":"57c28f35-52f1-48aa-ad74-3f66a5cdd52c","Type":"ContainerDied","Data":"16a1f903c50b3c403b22ec000b847b2519ad4ad6ce01753bfec751cebe9c9a6e"}
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.302359 4803 scope.go:117] "RemoveContainer" containerID="16a1f903c50b3c403b22ec000b847b2519ad4ad6ce01753bfec751cebe9c9a6e"
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.314105 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt" event={"ID":"5bedb1c3-9c5a-4137-851d-33b1723a3221","Type":"ContainerStarted","Data":"d641e3a40e40f225a58714e9f667a6252dec381a34c60aafad10c16986ec6899"}
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.315610 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt"
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.323227 4803 generic.go:334] "Generic (PLEG): container finished" podID="f28d4382-79f1-4254-a4fa-fced45178594" containerID="d06e6be93e46765a13fa6664692c7463799cde50407a37cfe737f3841cdd2b9c" exitCode=0
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.323801 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nds5" event={"ID":"f28d4382-79f1-4254-a4fa-fced45178594","Type":"ContainerDied","Data":"d06e6be93e46765a13fa6664692c7463799cde50407a37cfe737f3841cdd2b9c"}
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.324352 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.324406 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.326059 4803 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.326133 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.327489 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.327535 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.705673 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp"
Jan 27 23:07:17 crc kubenswrapper[4803]: I0127 23:07:17.722138 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-nxlck"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.155900 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="4493a984-e728-410f-9362-0795391f2793" containerName="galera" containerID="cri-o://b377002717e410ad179d88d9b643c5b6f14ddaabc67985dc331b619f08ea2116" gracePeriod=20
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.178381 4803 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body=
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.178435 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.343788 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="6c78b382-5735-4741-b087-cefda68053f4" containerName="galera" containerID="cri-o://3abfa89db2c69b77e3243b70fc7639be8d55df5685260f5eaf42b68c83d1de7f" gracePeriod=18
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.383461 4803 generic.go:334] "Generic (PLEG): container finished" podID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerID="cb4ba389c387b989d42589e012b26e5087e092983e020a588397aa541d65796f" exitCode=0
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.383564 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" event={"ID":"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2","Type":"ContainerDied","Data":"cb4ba389c387b989d42589e012b26e5087e092983e020a588397aa541d65796f"}
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.383630 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" event={"ID":"bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2","Type":"ContainerStarted","Data":"e716d998645c26b91873b1be3d5bb2bbc2d8d85795b27e8f14d56d722cbe6b1e"}
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.383650 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.387242 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg2h2" event={"ID":"d6e32da0-91ce-49f6-8f4e-928b9fee6fdf","Type":"ContainerStarted","Data":"cf6cbab3fe9ee7e5f8bfdaafc3ac5ae73fc3b32d1aba8f05c2441fa3ecf6db79"}
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.388951 4803 generic.go:334] "Generic (PLEG): container finished" podID="f8498dfc-1b67-4783-9389-10d5b30b2860" containerID="948f488f78855df8da62f0f21630dbaf689211511c0157dc482e12cbbcea6c50" exitCode=1
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.389003 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" event={"ID":"f8498dfc-1b67-4783-9389-10d5b30b2860","Type":"ContainerDied","Data":"948f488f78855df8da62f0f21630dbaf689211511c0157dc482e12cbbcea6c50"}
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.389965 4803 scope.go:117] "RemoveContainer" containerID="948f488f78855df8da62f0f21630dbaf689211511c0157dc482e12cbbcea6c50"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.404626 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw" event={"ID":"7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79","Type":"ContainerStarted","Data":"f3da02a7d1a9c47b8e4dc7d28b27768d2276098bc9d90b9d02b40fb9840b48c5"}
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.405015 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.422617 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql" event={"ID":"57c28f35-52f1-48aa-ad74-3f66a5cdd52c","Type":"ContainerStarted","Data":"e9159baa9f8cef18bb355ee05e569be42adee797ff3b2a0805dec3d594dffb6a"}
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.423769 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.431250 4803 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37" exitCode=0
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.431399 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"61b07367ddeab610d3584572489b31ef96b298ac2ce8f9da939ce53037572d37"}
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.473866 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d"}
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.502658 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9crs2" event={"ID":"a5265b8b-6b21-4c52-be79-e6c2a2f94a1d","Type":"ContainerStarted","Data":"762050ae63b12fb4c98777507283093b61ed1557d0508c3c371ba56330b8aaf2"}
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.509781 4803 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlj5d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" start-of-body=
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.509878 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d" podUID="2b1c25f0-10e5-41a3-81ca-aef5372a4d38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.533381 4803 patch_prober.go:28] interesting pod/controller-manager-7df488d7f-9qs98 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.533536 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98" podUID="f37cfcbc-f864-4f97-804e-b5ba5313c347" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.535825 4803 patch_prober.go:28] interesting pod/route-controller-manager-c4b5fc665-k52v8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body=
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.535879 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podUID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.536053 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.536116 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.536194 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.536220 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.541764 4803 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.541817 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.863903 4803 patch_prober.go:28] interesting pod/console-operator-58897d9998-h9nvv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.864290 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.864365 4803 patch_prober.go:28] interesting pod/console-operator-58897d9998-h9nvv container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 27 23:07:18 crc kubenswrapper[4803]: I0127 23:07:18.864383 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-h9nvv" podUID="61adce3e-cfdd-4a33-b64d-f49069ef6469" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.076624 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.076712 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.076819 4803 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dfdfn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.076885 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn" podUID="31c328be-cd7e-48a1-bb8d-086bbe5f1dd6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.217036 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-q4xmw"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.228574 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zr5dw"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.289134 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok
Jan 27 23:07:19 crc kubenswrapper[4803]: [+]has-synced ok
Jan 27 23:07:19 crc kubenswrapper[4803]: [-]process-running failed: reason withheld
Jan 27 23:07:19 crc kubenswrapper[4803]: healthz check failed
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.289396 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.319633 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body=
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.319656 4803 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hmpmk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body=
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.319690 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.319716 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk" podUID="767d334b-3f70-4847-b45a-ccf0d7e2dc2b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.322873 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-bs4dm"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.357837 4803 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.357968 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.361009 4803 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qcx9g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.361061 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g" podUID="25eb3de0-78b3-4e89-a860-9f1778060c50" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.395210 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9crs2"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.395282 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9crs2"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.519055 4803 generic.go:334] "Generic (PLEG): container finished" podID="7e1a6ace-a129-49c9-a417-8e3cff536f8f" containerID="b38a7e1bde06d99eb8a70c9e615c871d61b42fb709378ee424f8e73868221c9c" exitCode=0
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.519147 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7e1a6ace-a129-49c9-a417-8e3cff536f8f","Type":"ContainerDied","Data":"b38a7e1bde06d99eb8a70c9e615c871d61b42fb709378ee424f8e73868221c9c"}
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.519206 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7e1a6ace-a129-49c9-a417-8e3cff536f8f","Type":"ContainerStarted","Data":"79cd3364d1b315fc4550bf445c5b01462aee456fb476fa470e70835d6998c842"}
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.524700 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"de54689271645b293ebfe171f0be232c6bc093d66fa2bbe05eb81d515ae66090"}
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.524811 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.531011 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nds5" event={"ID":"f28d4382-79f1-4254-a4fa-fced45178594","Type":"ContainerStarted","Data":"0f139069759c85035f2857f914e4c9cd8223b790f7fbab19685531d0c2d3e1e6"}
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.535654 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc" event={"ID":"f8498dfc-1b67-4783-9389-10d5b30b2860","Type":"ContainerStarted","Data":"88b97ecd9f7b843fd155d64c0d4f5116e861c8e4c213a9a2ac3b93d5b4316caf"}
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.626578 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-98b9df85f-f5gmm"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.636056 4803 trace.go:236] Trace[1170834065]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (27-Jan-2026 23:07:16.866) (total time: 2765ms):
Jan 27 23:07:19 crc kubenswrapper[4803]: Trace[1170834065]: [2.765743754s] [2.765743754s] END
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.717316 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="3427d6c9-1902-41c1-8b41-fa9f2cc92dc7" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 23:07:19 crc kubenswrapper[4803]: I0127 23:07:19.765993 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 23:07:20 crc kubenswrapper[4803]: I0127 23:07:20.459220 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9crs2" podUID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:07:20 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:07:20 crc kubenswrapper[4803]: >
Jan 27 23:07:20 crc kubenswrapper[4803]: I0127 23:07:20.548220 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-mgtlh_056beb8e-ab30-48dc-b00e-6c261269431f/router/0.log"
Jan 27 23:07:20 crc kubenswrapper[4803]: I0127 23:07:20.548453 4803 generic.go:334] "Generic (PLEG): container finished" podID="056beb8e-ab30-48dc-b00e-6c261269431f" containerID="21f73c45e2f9012a699b50af081501f3fc1d57615e96de8b16ffb2f2ceadddf4" exitCode=137
Jan 27 23:07:20 crc kubenswrapper[4803]: I0127 23:07:20.548668 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mgtlh" event={"ID":"056beb8e-ab30-48dc-b00e-6c261269431f","Type":"ContainerDied","Data":"21f73c45e2f9012a699b50af081501f3fc1d57615e96de8b16ffb2f2ceadddf4"}
Jan 27 23:07:20 crc kubenswrapper[4803]: I0127 23:07:20.621866 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hg2h2"
Jan 27 23:07:20 crc kubenswrapper[4803]: I0127 23:07:20.621914 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hg2h2"
Jan 27 23:07:20 crc kubenswrapper[4803]: I0127 23:07:20.878002 4803 patch_prober.go:28] interesting pod/route-controller-manager-c4b5fc665-k52v8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body=
Jan 27 23:07:20 crc kubenswrapper[4803]: I0127 23:07:20.878375 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8" podUID="7cd4933d-5334-4da7-8a38-e0f42c85bfbe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.011602 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.011932 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-notification-agent" containerID="cri-o://b6e702a4a9cc100b2b9d048d40c8a324a207b7103d2498fa6d532ec86613d573" gracePeriod=30
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.011996 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-central-agent" containerID="cri-o://b30cc24cd8eabf1112dea7ca32c85b17e45d6ff38e2c09af245838396b131565" gracePeriod=30
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.012051 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="sg-core" containerID="cri-o://f6f817e6c8bdd38c60e602da2e5dd27bd3562ef47bd8954e48d1815a4be45144" gracePeriod=30
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.012065 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="proxy-httpd" containerID="cri-o://2aae4bcf6852b4cdf1ff3ea2493b612c2475445d9f0c50593ef5735371daed0b" gracePeriod=30
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.028011 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.028066 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.028098 4803 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hgn8v container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.028153 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v" podUID="620f5cd9-d7ac-436d-8d1f-66617d4fe1a3" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.099077 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bqlpm"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.525233 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.525534 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.525750 4803 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-stngg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.525768 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg" podUID="bddfdf1e-4748-467b-8c09-e9ea1d3ff6d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.575352 4803 generic.go:334] "Generic (PLEG): container finished" podID="4493a984-e728-410f-9362-0795391f2793" containerID="b377002717e410ad179d88d9b643c5b6f14ddaabc67985dc331b619f08ea2116" exitCode=0
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.575433 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4493a984-e728-410f-9362-0795391f2793","Type":"ContainerDied","Data":"b377002717e410ad179d88d9b643c5b6f14ddaabc67985dc331b619f08ea2116"}
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.586417 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-mgtlh_056beb8e-ab30-48dc-b00e-6c261269431f/router/0.log"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.586551 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mgtlh" event={"ID":"056beb8e-ab30-48dc-b00e-6c261269431f","Type":"ContainerStarted","Data":"291afca46ffa66128d4b93f7f29b64d2235bbd03ed32353c25387a9ab4bdd360"}
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.600754 4803 generic.go:334] "Generic (PLEG): container finished" podID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerID="f6f817e6c8bdd38c60e602da2e5dd27bd3562ef47bd8954e48d1815a4be45144" exitCode=2
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.600803 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerDied","Data":"f6f817e6c8bdd38c60e602da2e5dd27bd3562ef47bd8954e48d1815a4be45144"}
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.663060 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-74nng" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:07:21 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:07:21 crc kubenswrapper[4803]: >
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.675438 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hg2h2" podUID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:07:21 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:07:21 crc kubenswrapper[4803]: >
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.830585 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5qnbd"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.848207 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9nds5"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.850698 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9nds5"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.886901 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-pcnl7"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.912562 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-hxpmk"
Jan 27 23:07:21 crc kubenswrapper[4803]: I0127 23:07:21.961954 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.082044 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7sjdg"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.121251 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-w8nw7"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.145250 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-t9ng6"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.146080 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="3427d6c9-1902-41c1-8b41-fa9f2cc92dc7" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.269102 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mgtlh"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.269306 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.269336 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.337119 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-r5dqr"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.432767 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-t9zrn"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.504803 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qg2hw"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.545512 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": dial tcp 10.217.0.11:8081: connect: connection refused" start-of-body=
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.545570 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": dial tcp 10.217.0.11:8081: connect: connection refused"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.545646 4803 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-skn2q container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.11:8081/healthz\": dial tcp 10.217.0.11:8081: connect: connection refused" start-of-body=
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.545664 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-skn2q" podUID="69126409-4642-4d42-855d-e7325b3de7c5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.11:8081/healthz\": dial tcp 10.217.0.11:8081: connect: connection refused"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.707727 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4493a984-e728-410f-9362-0795391f2793","Type":"ContainerStarted","Data":"38c474449021c006804ae91ca0c8cba4a50c4031f5a1d18503b56a32ea3c5f8c"}
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.708728 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-prltl"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.745184 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-nfxjq"
Jan 27 23:07:22 crc kubenswrapper[4803]: I0127 23:07:22.862065 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9hlvn"
Jan 27 23:07:23 crc kubenswrapper[4803]: I0127 23:07:23.033618 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9nds5" podUID="f28d4382-79f1-4254-a4fa-fced45178594" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:07:23 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:07:23 crc kubenswrapper[4803]: >
Jan 27 23:07:23 crc kubenswrapper[4803]: I0127 23:07:23.297426 4803 patch_prober.go:28] interesting pod/router-default-5444994796-mgtlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 23:07:23 crc kubenswrapper[4803]: [+]has-synced ok
Jan 27 23:07:23 crc kubenswrapper[4803]: [+]process-running ok
Jan 27 23:07:23 crc kubenswrapper[4803]: healthz check failed
Jan 27 23:07:23 crc kubenswrapper[4803]: I0127 23:07:23.297478 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mgtlh" podUID="056beb8e-ab30-48dc-b00e-6c261269431f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 23:07:23 crc kubenswrapper[4803]: I0127 23:07:23.717559 4803 generic.go:334] "Generic (PLEG): container finished" podID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerID="2aae4bcf6852b4cdf1ff3ea2493b612c2475445d9f0c50593ef5735371daed0b" exitCode=0
Jan 27 23:07:23 crc kubenswrapper[4803]: I0127 23:07:23.717626 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerDied","Data":"2aae4bcf6852b4cdf1ff3ea2493b612c2475445d9f0c50593ef5735371daed0b"}
Jan 27 23:07:24 crc kubenswrapper[4803]: I0127 23:07:24.251724 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-8d685d9cc-c64j5"
Jan 27 23:07:24 crc kubenswrapper[4803]: I0127 23:07:24.272823 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mgtlh"
Jan 27 23:07:24 crc kubenswrapper[4803]: E0127 23:07:24.304954 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3abfa89db2c69b77e3243b70fc7639be8d55df5685260f5eaf42b68c83d1de7f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 27 23:07:24 crc kubenswrapper[4803]: E0127 23:07:24.321636 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3abfa89db2c69b77e3243b70fc7639be8d55df5685260f5eaf42b68c83d1de7f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 27 23:07:24 crc kubenswrapper[4803]: E0127 23:07:24.353808 4803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3abfa89db2c69b77e3243b70fc7639be8d55df5685260f5eaf42b68c83d1de7f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 27 23:07:24 crc kubenswrapper[4803]: E0127 23:07:24.353897 4803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="6c78b382-5735-4741-b087-cefda68053f4" containerName="galera"
Jan 27 23:07:24 crc kubenswrapper[4803]: I0127 23:07:24.422013 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt"
Jan 27 23:07:24 crc kubenswrapper[4803]: I0127 23:07:24.611672 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86894678c6-4f29p"
Jan 27 23:07:24 crc kubenswrapper[4803]: I0127 23:07:24.697154 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-stngg"
Jan 27 23:07:24 crc kubenswrapper[4803]: I0127 23:07:24.768275 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mgtlh"
Jan 27 23:07:24 crc kubenswrapper[4803]: I0127 23:07:24.786963 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mgtlh"
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.129985 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="3427d6c9-1902-41c1-8b41-fa9f2cc92dc7" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.130420 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.132010 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"e4447e5dbe20b2f3719136a7f97068001abb3a38ede778b798104196088ed509"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted"
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.132080 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3427d6c9-1902-41c1-8b41-fa9f2cc92dc7" containerName="cinder-scheduler" containerID="cri-o://e4447e5dbe20b2f3719136a7f97068001abb3a38ede778b798104196088ed509" gracePeriod=30
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.218128 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-tp8d4"
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.224562 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-tp8d4"
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.377462 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tl69d"
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.466436 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-tp8d4"
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.780857 4803 generic.go:334] "Generic (PLEG): container finished" podID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerID="b6e702a4a9cc100b2b9d048d40c8a324a207b7103d2498fa6d532ec86613d573" exitCode=0
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.780890 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerDied","Data":"b6e702a4a9cc100b2b9d048d40c8a324a207b7103d2498fa6d532ec86613d573"}
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.912513 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7df488d7f-9qs98"
Jan 27 23:07:25 crc kubenswrapper[4803]: I0127 23:07:25.948894 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jsxr8"
Jan 27 23:07:26 crc kubenswrapper[4803]: I0127 23:07:26.134442 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 27 23:07:26 crc kubenswrapper[4803]: I0127 23:07:26.135788 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 27 23:07:26 crc kubenswrapper[4803]: I0127 23:07:26.377263 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-tp8d4"
Jan 27 23:07:26 crc kubenswrapper[4803]: I0127 23:07:26.516156 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.17:3000/\": dial tcp 10.217.1.17:3000: connect: connection refused"
Jan 27 23:07:26 crc kubenswrapper[4803]: I0127 23:07:26.732164 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vlj5d"
Jan 27 23:07:26 crc kubenswrapper[4803]: I0127 23:07:26.833971 4803 generic.go:334] "Generic (PLEG): container finished" podID="6c78b382-5735-4741-b087-cefda68053f4" containerID="3abfa89db2c69b77e3243b70fc7639be8d55df5685260f5eaf42b68c83d1de7f" exitCode=0
Jan 27 23:07:26 crc kubenswrapper[4803]: I0127 23:07:26.834079 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c78b382-5735-4741-b087-cefda68053f4","Type":"ContainerDied","Data":"3abfa89db2c69b77e3243b70fc7639be8d55df5685260f5eaf42b68c83d1de7f"}
Jan 27 23:07:27 crc kubenswrapper[4803]: I0127 23:07:27.849560 4803 generic.go:334] "Generic (PLEG): container finished" podID="3427d6c9-1902-41c1-8b41-fa9f2cc92dc7" containerID="e4447e5dbe20b2f3719136a7f97068001abb3a38ede778b798104196088ed509" exitCode=0
Jan 27 23:07:27 crc kubenswrapper[4803]: I0127 23:07:27.849644 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7","Type":"ContainerDied","Data":"e4447e5dbe20b2f3719136a7f97068001abb3a38ede778b798104196088ed509"}
Jan 27 23:07:27 crc kubenswrapper[4803]: I0127 23:07:27.853488 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c78b382-5735-4741-b087-cefda68053f4","Type":"ContainerStarted","Data":"aaacea5ea26d723a84b81ce53836d0475e8f3cb6efa9552869ed1c86ac098428"}
Jan 27 23:07:28 crc kubenswrapper[4803]: I0127 23:07:28.872722 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-h9nvv"
Jan 27 23:07:29 crc kubenswrapper[4803]: I0127 23:07:29.087211 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfdfn"
Jan 27 23:07:29 crc kubenswrapper[4803]: I0127 23:07:29.336835 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hmpmk"
Jan 27 23:07:29 crc kubenswrapper[4803]: I0127 23:07:29.363369 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qcx9g"
Jan 27 23:07:29 crc kubenswrapper[4803]: I0127 23:07:29.453067 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 27 23:07:29 crc kubenswrapper[4803]: I0127 23:07:29.804437 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 27 23:07:30 crc kubenswrapper[4803]: I0127 23:07:30.425818 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 27 23:07:30 crc kubenswrapper[4803]: I0127 23:07:30.505331 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9crs2" podUID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:07:30 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:07:30 crc kubenswrapper[4803]: >
Jan 27 23:07:30 crc kubenswrapper[4803]: I0127 23:07:30.882522 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c4b5fc665-k52v8"
Jan 27 23:07:30 crc kubenswrapper[4803]: I0127 23:07:30.894224 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3427d6c9-1902-41c1-8b41-fa9f2cc92dc7","Type":"ContainerStarted","Data":"97d61aba3d016e34ff4ccf063afeb17f3f13f75b1fa4207887af00627fae9cb7"}
Jan 27 23:07:31 crc kubenswrapper[4803]: I0127 23:07:31.036092 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hgn8v"
Jan 27 23:07:31 crc kubenswrapper[4803]: I0127 23:07:31.639289 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-74nng" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:07:31 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:07:31 crc kubenswrapper[4803]: >
Jan 27 23:07:31 crc kubenswrapper[4803]: I0127 23:07:31.694008 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hg2h2" podUID="d6e32da0-91ce-49f6-8f4e-928b9fee6fdf" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:07:31 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:07:31 crc kubenswrapper[4803]: >
Jan 27 23:07:31 crc kubenswrapper[4803]: I0127 23:07:31.901395 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mhx4f"]
Jan 27 23:07:31 crc kubenswrapper[4803]: I0127 23:07:31.911818 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhx4f"
Jan 27 23:07:31 crc kubenswrapper[4803]: I0127 23:07:31.939905 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mhx4f"]
Jan 27 23:07:31 crc kubenswrapper[4803]: I0127 23:07:31.975063 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2sffc"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.087033 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm99m\" (UniqueName: \"kubernetes.io/projected/65b25ebb-046c-47af-b45a-5da95b17f7d5-kube-api-access-bm99m\") pod \"redhat-operators-mhx4f\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " pod="openshift-marketplace/redhat-operators-mhx4f"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.087308 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-utilities\") pod \"redhat-operators-mhx4f\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " pod="openshift-marketplace/redhat-operators-mhx4f"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.087370 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-catalog-content\") pod \"redhat-operators-mhx4f\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " pod="openshift-marketplace/redhat-operators-mhx4f"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.189990 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm99m\" (UniqueName: \"kubernetes.io/projected/65b25ebb-046c-47af-b45a-5da95b17f7d5-kube-api-access-bm99m\") pod \"redhat-operators-mhx4f\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " pod="openshift-marketplace/redhat-operators-mhx4f"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.190092 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-utilities\") pod \"redhat-operators-mhx4f\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " pod="openshift-marketplace/redhat-operators-mhx4f"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.190145 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-catalog-content\") pod \"redhat-operators-mhx4f\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " pod="openshift-marketplace/redhat-operators-mhx4f"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.190731 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-catalog-content\") pod \"redhat-operators-mhx4f\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " pod="openshift-marketplace/redhat-operators-mhx4f"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.191072 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-utilities\") pod \"redhat-operators-mhx4f\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " pod="openshift-marketplace/redhat-operators-mhx4f"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.210358 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm99m\" (UniqueName: \"kubernetes.io/projected/65b25ebb-046c-47af-b45a-5da95b17f7d5-kube-api-access-bm99m\") pod \"redhat-operators-mhx4f\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " pod="openshift-marketplace/redhat-operators-mhx4f"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.238892 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhx4f"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.546897 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-skn2q"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.902005 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9nds5" podUID="f28d4382-79f1-4254-a4fa-fced45178594" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:07:32 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:07:32 crc kubenswrapper[4803]: >
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.902747 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-tz8ql"
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.918263 4803 generic.go:334] "Generic (PLEG): container finished" podID="9af7a299-6a76-452c-854d-d80a082dabf1" containerID="568fdfc6d7ee210678a5bb46f952c124af7b6c37d3b707be49cf4faee7e1f065" exitCode=1
Jan 27 23:07:32 crc kubenswrapper[4803]: I0127 23:07:32.918303 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9af7a299-6a76-452c-854d-d80a082dabf1","Type":"ContainerDied","Data":"568fdfc6d7ee210678a5bb46f952c124af7b6c37d3b707be49cf4faee7e1f065"}
Jan 27 23:07:33 crc kubenswrapper[4803]: I0127 23:07:33.471722 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mhx4f"]
Jan 27 23:07:33 crc kubenswrapper[4803]: W0127 23:07:33.479912 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65b25ebb_046c_47af_b45a_5da95b17f7d5.slice/crio-ce635517d275b14c3b807d884f3fb09e57f273f480d7d174eca13d8b692e15d3 WatchSource:0}: Error finding container ce635517d275b14c3b807d884f3fb09e57f273f480d7d174eca13d8b692e15d3: Status 404 returned error can't find the container with id ce635517d275b14c3b807d884f3fb09e57f273f480d7d174eca13d8b692e15d3
Jan 27 23:07:33 crc kubenswrapper[4803]: I0127 23:07:33.705235 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" podUID="3446baa2-c061-41ff-9652-16734b5bb97a" containerName="oauth-openshift" containerID="cri-o://2858e5cb08be19324a1c5c32c6c51bfafa2bf9f9357bbbe587d92af80f4560ee" gracePeriod=15
Jan 27 23:07:33 crc kubenswrapper[4803]: I0127 23:07:33.954167 4803 generic.go:334] "Generic (PLEG): container finished" podID="3446baa2-c061-41ff-9652-16734b5bb97a" containerID="2858e5cb08be19324a1c5c32c6c51bfafa2bf9f9357bbbe587d92af80f4560ee" exitCode=0
Jan 27 23:07:33 crc kubenswrapper[4803]: I0127 23:07:33.954231 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" event={"ID":"3446baa2-c061-41ff-9652-16734b5bb97a","Type":"ContainerDied","Data":"2858e5cb08be19324a1c5c32c6c51bfafa2bf9f9357bbbe587d92af80f4560ee"}
Jan 27 23:07:33 crc kubenswrapper[4803]: I0127 23:07:33.957718 4803 generic.go:334] "Generic (PLEG): container finished" podID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerID="a396bfe8c544523c53ca88b63377103055eba2f125c18f870ac77604928df612" exitCode=0
Jan 27 23:07:33 crc kubenswrapper[4803]: I0127 23:07:33.957840 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhx4f" event={"ID":"65b25ebb-046c-47af-b45a-5da95b17f7d5","Type":"ContainerDied","Data":"a396bfe8c544523c53ca88b63377103055eba2f125c18f870ac77604928df612"}
Jan 27 23:07:33 crc kubenswrapper[4803]: I0127 23:07:33.957989 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhx4f" event={"ID":"65b25ebb-046c-47af-b45a-5da95b17f7d5","Type":"ContainerStarted","Data":"ce635517d275b14c3b807d884f3fb09e57f273f480d7d174eca13d8b692e15d3"}
Jan 27 23:07:34 crc kubenswrapper[4803]: I0127 23:07:34.282058 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 27 23:07:34 crc kubenswrapper[4803]: I0127 23:07:34.282622 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 27 23:07:34 crc kubenswrapper[4803]: I0127 23:07:34.766069 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 23:07:34 crc kubenswrapper[4803]: I0127 23:07:34.972042 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp" event={"ID":"3446baa2-c061-41ff-9652-16734b5bb97a","Type":"ContainerStarted","Data":"bfeb00e0f1b5f0bf68a1108b589c79ff953695dfee7da8c8c286f5cdc0acfbe7"}
Jan 27 23:07:34 crc kubenswrapper[4803]: I0127 23:07:34.972274 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp"
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.074905 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.184125 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="3427d6c9-1902-41c1-8b41-fa9f2cc92dc7" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.261147 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.326932 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.334459 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-769fc69b77-cp7hp"
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.429121 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"9af7a299-6a76-452c-854d-d80a082dabf1\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") "
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.429192 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ssh-key\") pod \"9af7a299-6a76-452c-854d-d80a082dabf1\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") "
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.429239 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-config-data\") pod \"9af7a299-6a76-452c-854d-d80a082dabf1\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") "
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.429274 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ca-certs\") pod \"9af7a299-6a76-452c-854d-d80a082dabf1\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") "
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.429393 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-temporary\") pod \"9af7a299-6a76-452c-854d-d80a082dabf1\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") "
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.429442 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config\") pod \"9af7a299-6a76-452c-854d-d80a082dabf1\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") "
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.442413 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.447028 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9af7a299-6a76-452c-854d-d80a082dabf1" (UID: "9af7a299-6a76-452c-854d-d80a082dabf1"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.442642 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-config-data" (OuterVolumeSpecName: "config-data") pod "9af7a299-6a76-452c-854d-d80a082dabf1" (UID: "9af7a299-6a76-452c-854d-d80a082dabf1"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.442204 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "9af7a299-6a76-452c-854d-d80a082dabf1" (UID: "9af7a299-6a76-452c-854d-d80a082dabf1"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.533585 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrj99\" (UniqueName: \"kubernetes.io/projected/9af7a299-6a76-452c-854d-d80a082dabf1-kube-api-access-vrj99\") pod \"9af7a299-6a76-452c-854d-d80a082dabf1\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.534434 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-workdir\") pod \"9af7a299-6a76-452c-854d-d80a082dabf1\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.534646 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config-secret\") pod \"9af7a299-6a76-452c-854d-d80a082dabf1\" (UID: \"9af7a299-6a76-452c-854d-d80a082dabf1\") " Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.541832 4803 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.542123 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.542219 4803 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.560684 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9af7a299-6a76-452c-854d-d80a082dabf1" (UID: "9af7a299-6a76-452c-854d-d80a082dabf1"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.564385 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af7a299-6a76-452c-854d-d80a082dabf1-kube-api-access-vrj99" (OuterVolumeSpecName: "kube-api-access-vrj99") pod "9af7a299-6a76-452c-854d-d80a082dabf1" (UID: "9af7a299-6a76-452c-854d-d80a082dabf1"). InnerVolumeSpecName "kube-api-access-vrj99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.594648 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.606805 4803 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.646261 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrj99\" (UniqueName: \"kubernetes.io/projected/9af7a299-6a76-452c-854d-d80a082dabf1-kube-api-access-vrj99\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.646299 4803 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9af7a299-6a76-452c-854d-d80a082dabf1-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.646310 4803 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.663237 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9af7a299-6a76-452c-854d-d80a082dabf1" (UID: "9af7a299-6a76-452c-854d-d80a082dabf1"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.678545 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9af7a299-6a76-452c-854d-d80a082dabf1" (UID: "9af7a299-6a76-452c-854d-d80a082dabf1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.698017 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9af7a299-6a76-452c-854d-d80a082dabf1" (UID: "9af7a299-6a76-452c-854d-d80a082dabf1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.742928 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9af7a299-6a76-452c-854d-d80a082dabf1" (UID: "9af7a299-6a76-452c-854d-d80a082dabf1"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.764588 4803 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.765021 4803 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.765140 4803 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.765212 4803 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9af7a299-6a76-452c-854d-d80a082dabf1-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.818166 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-98b9df85f-f5gmm" podUID="fa470512-29ae-4707-abdb-a93dd93f6b58" containerName="console" containerID="cri-o://a2a44aa47f06462db5296bc332114eb143798cd5cc78761f3d8ca741e57e2138" gracePeriod=14 Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.878434 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.989522 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.989508 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9af7a299-6a76-452c-854d-d80a082dabf1","Type":"ContainerDied","Data":"43b61e02b905b7462659a3f6743a8b5efa0aeeeac6cca4330c9659187d460e0d"} Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.989689 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43b61e02b905b7462659a3f6743a8b5efa0aeeeac6cca4330c9659187d460e0d" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.994986 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-98b9df85f-f5gmm_fa470512-29ae-4707-abdb-a93dd93f6b58/console/0.log" Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.995378 4803 generic.go:334] "Generic (PLEG): container finished" podID="fa470512-29ae-4707-abdb-a93dd93f6b58" containerID="a2a44aa47f06462db5296bc332114eb143798cd5cc78761f3d8ca741e57e2138" exitCode=2 Jan 27 23:07:35 crc kubenswrapper[4803]: I0127 23:07:35.995441 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-98b9df85f-f5gmm" event={"ID":"fa470512-29ae-4707-abdb-a93dd93f6b58","Type":"ContainerDied","Data":"a2a44aa47f06462db5296bc332114eb143798cd5cc78761f3d8ca741e57e2138"} Jan 27 23:07:36 crc kubenswrapper[4803]: I0127 23:07:36.000418 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhx4f" event={"ID":"65b25ebb-046c-47af-b45a-5da95b17f7d5","Type":"ContainerStarted","Data":"2b4f810eae0dd9a187d250436b7f3eadc8762ad943575c660257907323089259"} Jan 27 23:07:37 crc kubenswrapper[4803]: I0127 23:07:37.023409 4803 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-console_console-98b9df85f-f5gmm_fa470512-29ae-4707-abdb-a93dd93f6b58/console/0.log" Jan 27 23:07:37 crc kubenswrapper[4803]: I0127 23:07:37.023977 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-98b9df85f-f5gmm" event={"ID":"fa470512-29ae-4707-abdb-a93dd93f6b58","Type":"ContainerStarted","Data":"e0f46a838a3142032b742c70ef139ea374d3f9be8bce59f1b7f06c44995b6c97"} Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.729913 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 23:07:38 crc kubenswrapper[4803]: E0127 23:07:38.730697 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af7a299-6a76-452c-854d-d80a082dabf1" containerName="tempest-tests-tempest-tests-runner" Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.730712 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af7a299-6a76-452c-854d-d80a082dabf1" containerName="tempest-tests-tempest-tests-runner" Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.730986 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af7a299-6a76-452c-854d-d80a082dabf1" containerName="tempest-tests-tempest-tests-runner" Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.732066 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.736646 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-r2wvq" Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.744858 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.864163 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.864487 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2kdh\" (UniqueName: \"kubernetes.io/projected/b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8-kube-api-access-f2kdh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.967443 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2kdh\" (UniqueName: \"kubernetes.io/projected/b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8-kube-api-access-f2kdh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.967524 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.968763 4803 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 23:07:38 crc kubenswrapper[4803]: I0127 23:07:38.993905 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2kdh\" (UniqueName: \"kubernetes.io/projected/b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8-kube-api-access-f2kdh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 23:07:39 crc kubenswrapper[4803]: I0127 23:07:39.014207 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 23:07:39 crc kubenswrapper[4803]: I0127 23:07:39.068074 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 23:07:39 crc kubenswrapper[4803]: I0127 23:07:39.602607 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 23:07:39 crc kubenswrapper[4803]: I0127 23:07:39.605581 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 23:07:39 crc kubenswrapper[4803]: I0127 23:07:39.605668 4803 patch_prober.go:28] interesting pod/console-98b9df85f-f5gmm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.140:8443/health\": dial tcp 10.217.0.140:8443: connect: connection refused" start-of-body= Jan 27 23:07:39 crc kubenswrapper[4803]: I0127 23:07:39.605704 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98b9df85f-f5gmm" podUID="fa470512-29ae-4707-abdb-a93dd93f6b58" containerName="console" probeResult="failure" output="Get \"https://10.217.0.140:8443/health\": dial tcp 10.217.0.140:8443: connect: connection refused" Jan 27 23:07:39 crc kubenswrapper[4803]: I0127 23:07:39.667139 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 23:07:39 crc kubenswrapper[4803]: W0127 23:07:39.670318 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb81b69f7_fd9a_45b7_9c1c_89365a2e6ea8.slice/crio-10a127577d3ffad5da1cdcb69183c4a2a3d55edabe16ba1c99a66e5b402cdc60 WatchSource:0}: Error finding container 10a127577d3ffad5da1cdcb69183c4a2a3d55edabe16ba1c99a66e5b402cdc60: Status 404 returned error can't find the container with id 10a127577d3ffad5da1cdcb69183c4a2a3d55edabe16ba1c99a66e5b402cdc60 Jan 27 23:07:40 crc kubenswrapper[4803]: I0127 23:07:40.057058 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
event={"ID":"b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8","Type":"ContainerStarted","Data":"10a127577d3ffad5da1cdcb69183c4a2a3d55edabe16ba1c99a66e5b402cdc60"} Jan 27 23:07:40 crc kubenswrapper[4803]: I0127 23:07:40.119768 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 23:07:40 crc kubenswrapper[4803]: I0127 23:07:40.556050 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9crs2" podUID="a5265b8b-6b21-4c52-be79-e6c2a2f94a1d" containerName="registry-server" probeResult="failure" output=< Jan 27 23:07:40 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:07:40 crc kubenswrapper[4803]: > Jan 27 23:07:40 crc kubenswrapper[4803]: I0127 23:07:40.751490 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 23:07:40 crc kubenswrapper[4803]: I0127 23:07:40.824292 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hg2h2" Jan 27 23:07:41 crc kubenswrapper[4803]: I0127 23:07:41.130579 4803 generic.go:334] "Generic (PLEG): container finished" podID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerID="2b4f810eae0dd9a187d250436b7f3eadc8762ad943575c660257907323089259" exitCode=0 Jan 27 23:07:41 crc kubenswrapper[4803]: I0127 23:07:41.131016 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhx4f" event={"ID":"65b25ebb-046c-47af-b45a-5da95b17f7d5","Type":"ContainerDied","Data":"2b4f810eae0dd9a187d250436b7f3eadc8762ad943575c660257907323089259"} Jan 27 23:07:41 crc kubenswrapper[4803]: I0127 23:07:41.636281 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-74nng" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="registry-server" probeResult="failure" output=< Jan 27 23:07:41 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:07:41 crc kubenswrapper[4803]: > Jan 27 23:07:42 crc kubenswrapper[4803]: I0127 23:07:42.898955 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9nds5" podUID="f28d4382-79f1-4254-a4fa-fced45178594" containerName="registry-server" probeResult="failure" output=< Jan 27 23:07:42 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:07:42 crc kubenswrapper[4803]: > Jan 27 23:07:43 crc kubenswrapper[4803]: I0127 23:07:43.151151 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhx4f" event={"ID":"65b25ebb-046c-47af-b45a-5da95b17f7d5","Type":"ContainerStarted","Data":"aa8cb287e08f6b2576f9bdbdad9ce4bf1e477197b34a93d0dabe59c49fcea125"} Jan 27 23:07:43 crc kubenswrapper[4803]: I0127 23:07:43.152607 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8","Type":"ContainerStarted","Data":"2f54ed14b53b10606b51626cf2b18c7f23338c40ff6f8748e34a8c79dcc117f3"} Jan 27 23:07:43 crc kubenswrapper[4803]: I0127 23:07:43.174796 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mhx4f" podStartSLOduration=4.229733544 podStartE2EDuration="12.174775388s" podCreationTimestamp="2026-01-27 23:07:31 +0000 UTC" firstStartedPulling="2026-01-27 23:07:33.96722874 
+0000 UTC m=+4806.383250439" lastFinishedPulling="2026-01-27 23:07:41.912270584 +0000 UTC m=+4814.328292283" observedRunningTime="2026-01-27 23:07:43.169228569 +0000 UTC m=+4815.585250288" watchObservedRunningTime="2026-01-27 23:07:43.174775388 +0000 UTC m=+4815.590797087" Jan 27 23:07:43 crc kubenswrapper[4803]: I0127 23:07:43.193862 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.9558889710000003 podStartE2EDuration="5.193826323s" podCreationTimestamp="2026-01-27 23:07:38 +0000 UTC" firstStartedPulling="2026-01-27 23:07:39.673977983 +0000 UTC m=+4812.089999672" lastFinishedPulling="2026-01-27 23:07:41.911915315 +0000 UTC m=+4814.327937024" observedRunningTime="2026-01-27 23:07:43.185814327 +0000 UTC m=+4815.601836026" watchObservedRunningTime="2026-01-27 23:07:43.193826323 +0000 UTC m=+4815.609848022" Jan 27 23:07:49 crc kubenswrapper[4803]: I0127 23:07:49.375569 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d65kn" Jan 27 23:07:49 crc kubenswrapper[4803]: I0127 23:07:49.463594 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9crs2" Jan 27 23:07:49 crc kubenswrapper[4803]: I0127 23:07:49.534980 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9crs2" Jan 27 23:07:49 crc kubenswrapper[4803]: I0127 23:07:49.614113 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 23:07:49 crc kubenswrapper[4803]: I0127 23:07:49.618997 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-98b9df85f-f5gmm" Jan 27 23:07:50 crc kubenswrapper[4803]: I0127 23:07:50.626938 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:07:50 crc kubenswrapper[4803]: I0127 23:07:50.687390 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.246161 4803 generic.go:334] "Generic (PLEG): container finished" podID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerID="b30cc24cd8eabf1112dea7ca32c85b17e45d6ff38e2c09af245838396b131565" exitCode=137 Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.246258 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerDied","Data":"b30cc24cd8eabf1112dea7ca32c85b17e45d6ff38e2c09af245838396b131565"} Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.246377 4803 scope.go:117] "RemoveContainer" containerID="ec91d42bd8a135d0c614d6ed97e86acfb3222e35f87ebe79744ce38bff5ca16a" Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.762001 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.883482 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74nng"] Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.902326 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9nds5" Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.902464 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-combined-ca-bundle\") pod \"fbed465b-e99e-4ef2-8217-f363bd3ec042\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.902552 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-ceilometer-tls-certs\") pod \"fbed465b-e99e-4ef2-8217-f363bd3ec042\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.902580 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-scripts\") pod \"fbed465b-e99e-4ef2-8217-f363bd3ec042\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.902608 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-run-httpd\") pod \"fbed465b-e99e-4ef2-8217-f363bd3ec042\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.902634 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swmgw\" (UniqueName: \"kubernetes.io/projected/fbed465b-e99e-4ef2-8217-f363bd3ec042-kube-api-access-swmgw\") pod \"fbed465b-e99e-4ef2-8217-f363bd3ec042\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.902726 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-config-data\") pod \"fbed465b-e99e-4ef2-8217-f363bd3ec042\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.902788 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-sg-core-conf-yaml\") pod \"fbed465b-e99e-4ef2-8217-f363bd3ec042\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.902829 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-log-httpd\") pod \"fbed465b-e99e-4ef2-8217-f363bd3ec042\" (UID: \"fbed465b-e99e-4ef2-8217-f363bd3ec042\") " Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.908793 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fbed465b-e99e-4ef2-8217-f363bd3ec042" (UID: 
"fbed465b-e99e-4ef2-8217-f363bd3ec042"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.909248 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fbed465b-e99e-4ef2-8217-f363bd3ec042" (UID: "fbed465b-e99e-4ef2-8217-f363bd3ec042"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.916617 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-scripts" (OuterVolumeSpecName: "scripts") pod "fbed465b-e99e-4ef2-8217-f363bd3ec042" (UID: "fbed465b-e99e-4ef2-8217-f363bd3ec042"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.935806 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbed465b-e99e-4ef2-8217-f363bd3ec042-kube-api-access-swmgw" (OuterVolumeSpecName: "kube-api-access-swmgw") pod "fbed465b-e99e-4ef2-8217-f363bd3ec042" (UID: "fbed465b-e99e-4ef2-8217-f363bd3ec042"). InnerVolumeSpecName "kube-api-access-swmgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.958904 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fbed465b-e99e-4ef2-8217-f363bd3ec042" (UID: "fbed465b-e99e-4ef2-8217-f363bd3ec042"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.995211 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9nds5" Jan 27 23:07:51 crc kubenswrapper[4803]: I0127 23:07:51.995407 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fbed465b-e99e-4ef2-8217-f363bd3ec042" (UID: "fbed465b-e99e-4ef2-8217-f363bd3ec042"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.005816 4803 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.005861 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.005870 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.005880 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swmgw\" (UniqueName: \"kubernetes.io/projected/fbed465b-e99e-4ef2-8217-f363bd3ec042-kube-api-access-swmgw\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.005889 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.005897 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbed465b-e99e-4ef2-8217-f363bd3ec042-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.149480 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-config-data" (OuterVolumeSpecName: "config-data") pod "fbed465b-e99e-4ef2-8217-f363bd3ec042" (UID: "fbed465b-e99e-4ef2-8217-f363bd3ec042"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.150414 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbed465b-e99e-4ef2-8217-f363bd3ec042" (UID: "fbed465b-e99e-4ef2-8217-f363bd3ec042"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.236662 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.236696 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbed465b-e99e-4ef2-8217-f363bd3ec042-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.239087 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mhx4f" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.239137 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mhx4f" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.265673 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbed465b-e99e-4ef2-8217-f363bd3ec042","Type":"ContainerDied","Data":"79916fa7dceb5b8492e56d34f3daba340c8c9cba83c453f25b03ccd6c1d897a9"} Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.265741 4803 scope.go:117] "RemoveContainer" containerID="b30cc24cd8eabf1112dea7ca32c85b17e45d6ff38e2c09af245838396b131565" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.265747 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.266091 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-74nng" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="registry-server" containerID="cri-o://a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f" gracePeriod=2 Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.335116 4803 scope.go:117] "RemoveContainer" containerID="2aae4bcf6852b4cdf1ff3ea2493b612c2475445d9f0c50593ef5735371daed0b" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.359390 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.359430 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.382214 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 23:07:52 crc kubenswrapper[4803]: E0127 23:07:52.382859 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="sg-core" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.382884 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="sg-core" Jan 27 23:07:52 crc kubenswrapper[4803]: E0127 23:07:52.382902 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-central-agent" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.382911 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-central-agent" Jan 27 23:07:52 crc kubenswrapper[4803]: E0127 23:07:52.382924 4803 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="proxy-httpd" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.382931 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="proxy-httpd" Jan 27 23:07:52 crc kubenswrapper[4803]: E0127 23:07:52.382958 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-central-agent" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.382968 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-central-agent" Jan 27 23:07:52 crc kubenswrapper[4803]: E0127 23:07:52.382999 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-notification-agent" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.383008 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-notification-agent" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.383245 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-notification-agent" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.383265 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="proxy-httpd" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.383277 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-central-agent" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.383286 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="ceilometer-central-agent" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.383298 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" containerName="sg-core" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.385791 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.386250 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.388840 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.389079 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.389411 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.400712 4803 scope.go:117] "RemoveContainer" containerID="f6f817e6c8bdd38c60e602da2e5dd27bd3562ef47bd8954e48d1815a4be45144" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.443592 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.443707 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-scripts\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.443835 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm7pr\" (UniqueName: \"kubernetes.io/projected/a133717e-ae46-450e-b3ae-292103d98bbe-kube-api-access-jm7pr\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.443931 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.444034 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.444090 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-run-httpd\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.444143 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-config-data\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: 
I0127 23:07:52.444159 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-log-httpd\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.473750 4803 scope.go:117] "RemoveContainer" containerID="b6e702a4a9cc100b2b9d048d40c8a324a207b7103d2498fa6d532ec86613d573" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.547694 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.547795 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.547868 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-run-httpd\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.547922 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-config-data\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.547961 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-log-httpd\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.548022 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.548071 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-scripts\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.548144 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm7pr\" (UniqueName: \"kubernetes.io/projected/a133717e-ae46-450e-b3ae-292103d98bbe-kube-api-access-jm7pr\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.550223 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-log-httpd\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.552744 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-run-httpd\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.554012 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.554912 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-scripts\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.564807 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.567125 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-config-data\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.568228 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.577110 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm7pr\" (UniqueName: \"kubernetes.io/projected/a133717e-ae46-450e-b3ae-292103d98bbe-kube-api-access-jm7pr\") pod \"ceilometer-0\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.767942 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 23:07:52 crc kubenswrapper[4803]: I0127 23:07:52.934458 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.060175 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-utilities\") pod \"654b6723-6b6d-41ac-92fe-f097f87735a4\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.060243 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbzs8\" (UniqueName: \"kubernetes.io/projected/654b6723-6b6d-41ac-92fe-f097f87735a4-kube-api-access-pbzs8\") pod \"654b6723-6b6d-41ac-92fe-f097f87735a4\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.060415 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-catalog-content\") pod \"654b6723-6b6d-41ac-92fe-f097f87735a4\" (UID: \"654b6723-6b6d-41ac-92fe-f097f87735a4\") " Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.063318 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-utilities" (OuterVolumeSpecName: "utilities") pod "654b6723-6b6d-41ac-92fe-f097f87735a4" (UID: "654b6723-6b6d-41ac-92fe-f097f87735a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.068400 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654b6723-6b6d-41ac-92fe-f097f87735a4-kube-api-access-pbzs8" (OuterVolumeSpecName: "kube-api-access-pbzs8") pod "654b6723-6b6d-41ac-92fe-f097f87735a4" (UID: "654b6723-6b6d-41ac-92fe-f097f87735a4"). InnerVolumeSpecName "kube-api-access-pbzs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.147832 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "654b6723-6b6d-41ac-92fe-f097f87735a4" (UID: "654b6723-6b6d-41ac-92fe-f097f87735a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.163831 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.164121 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbzs8\" (UniqueName: \"kubernetes.io/projected/654b6723-6b6d-41ac-92fe-f097f87735a4-kube-api-access-pbzs8\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.164194 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654b6723-6b6d-41ac-92fe-f097f87735a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.288881 4803 generic.go:334] "Generic (PLEG): container finished" podID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerID="a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f" exitCode=0 Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.289053 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74nng" event={"ID":"654b6723-6b6d-41ac-92fe-f097f87735a4","Type":"ContainerDied","Data":"a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f"} Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.289392 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74nng" event={"ID":"654b6723-6b6d-41ac-92fe-f097f87735a4","Type":"ContainerDied","Data":"3a66ddee2d12b2109090716d81a7a83113e8f28f5ed77a583a8635e38f686d77"} Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.289418 4803 scope.go:117] "RemoveContainer" containerID="a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.289084 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-74nng" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.300423 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mhx4f" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerName="registry-server" probeResult="failure" output=< Jan 27 23:07:53 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:07:53 crc kubenswrapper[4803]: > Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.323510 4803 scope.go:117] "RemoveContainer" containerID="a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.324682 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 23:07:53 crc kubenswrapper[4803]: W0127 23:07:53.330105 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda133717e_ae46_450e_b3ae_292103d98bbe.slice/crio-cf8ffc15939e91ccf8c13f3c1bad6415618f17a109700e353f8b78291c9df2df WatchSource:0}: Error finding container cf8ffc15939e91ccf8c13f3c1bad6415618f17a109700e353f8b78291c9df2df: Status 404 returned error can't find the container with id cf8ffc15939e91ccf8c13f3c1bad6415618f17a109700e353f8b78291c9df2df Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.365621 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74nng"] Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.377930 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-74nng"] Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.423293 4803 scope.go:117] "RemoveContainer" containerID="0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.464945 4803 scope.go:117] "RemoveContainer" containerID="a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f" Jan 27 23:07:53 crc kubenswrapper[4803]: E0127 23:07:53.466922 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f\": container with ID starting with a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f not found: ID does not exist" containerID="a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.467005 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f"} err="failed to get container status \"a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f\": rpc error: code = NotFound desc = could not find container \"a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f\": container with ID starting with a6b7c04b04ce3c590238d673979c06cc2893879861fe7de7fd4122051abb563f not found: ID does not exist" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.467040 4803 scope.go:117] "RemoveContainer" containerID="a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848" Jan 27 23:07:53 crc kubenswrapper[4803]: E0127 23:07:53.467451 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848\": container with ID starting with a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848 not found: ID does not exist" containerID="a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.467605 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848"} err="failed to get container status \"a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848\": rpc error: code = NotFound desc = could not find container \"a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848\": container with ID starting with a464a26c6be17fdbf8bdbad06b46576b0c9c8b228bd87f97297c18b6a4a22848 not found: ID does not exist" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.467628 4803 scope.go:117] "RemoveContainer" containerID="0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d" Jan 27 23:07:53 crc kubenswrapper[4803]: E0127 23:07:53.473669 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d\": container with ID starting with 0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d not found: ID does not exist" containerID="0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d" Jan 27 23:07:53 crc kubenswrapper[4803]: I0127 23:07:53.473737 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d"} err="failed to get container status \"0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d\": rpc error: code = NotFound desc = could not find container \"0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d\": container with ID starting with 0d17e959fc2aaf9a7cc58acaeed6ae36d75832d2272bdbc50da1433b6d23c02d not found: ID does not exist" Jan 27 23:07:54 crc kubenswrapper[4803]: I0127 23:07:54.107145 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 23:07:54 crc kubenswrapper[4803]: I0127 23:07:54.254486 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-848cc4d96f-sx8xb" Jan 27 23:07:54 crc kubenswrapper[4803]: I0127 23:07:54.351114 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" path="/var/lib/kubelet/pods/654b6723-6b6d-41ac-92fe-f097f87735a4/volumes" Jan 27 23:07:54 crc kubenswrapper[4803]: I0127 23:07:54.354620 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbed465b-e99e-4ef2-8217-f363bd3ec042" path="/var/lib/kubelet/pods/fbed465b-e99e-4ef2-8217-f363bd3ec042/volumes" Jan 27 23:07:54 crc kubenswrapper[4803]: I0127 23:07:54.372505 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a133717e-ae46-450e-b3ae-292103d98bbe","Type":"ContainerStarted","Data":"2cdf786371f4591821183edd50cfea66204a3fdb537223f2a2b2ad26423e6860"} Jan 27 23:07:54 crc kubenswrapper[4803]: I0127 23:07:54.372558 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a133717e-ae46-450e-b3ae-292103d98bbe","Type":"ContainerStarted","Data":"cf8ffc15939e91ccf8c13f3c1bad6415618f17a109700e353f8b78291c9df2df"} Jan 27 23:07:55 crc kubenswrapper[4803]: I0127 23:07:55.394142 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a133717e-ae46-450e-b3ae-292103d98bbe","Type":"ContainerStarted","Data":"8843a34c119d9af8356c166026940df0ffa79b997efc254c2751e625f08fce31"} Jan 27 23:07:56 crc kubenswrapper[4803]: I0127 23:07:56.410508 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a133717e-ae46-450e-b3ae-292103d98bbe","Type":"ContainerStarted","Data":"099a65eaf56c2e4d4282640b764c7b4a4d6b0ccfc7b903a3c5a77996cae06be0"} Jan 27 23:07:57 crc kubenswrapper[4803]: I0127 23:07:57.423199 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a133717e-ae46-450e-b3ae-292103d98bbe","Type":"ContainerStarted","Data":"022281cd3c3e4de8185e0e75651436b5fcfa447b0bd41b759e0d0c8a853bc79d"} Jan 27 23:07:57 crc kubenswrapper[4803]: I0127 23:07:57.424013 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="ceilometer-central-agent" containerID="cri-o://2cdf786371f4591821183edd50cfea66204a3fdb537223f2a2b2ad26423e6860" gracePeriod=30 Jan 27 23:07:57 crc kubenswrapper[4803]: I0127 23:07:57.424340 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 23:07:57 crc kubenswrapper[4803]: I0127 23:07:57.424711 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="proxy-httpd" containerID="cri-o://022281cd3c3e4de8185e0e75651436b5fcfa447b0bd41b759e0d0c8a853bc79d" gracePeriod=30 Jan 27 23:07:57 crc kubenswrapper[4803]: I0127 23:07:57.424759 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="sg-core" containerID="cri-o://099a65eaf56c2e4d4282640b764c7b4a4d6b0ccfc7b903a3c5a77996cae06be0" gracePeriod=30 Jan 27 23:07:57 crc kubenswrapper[4803]: I0127 23:07:57.424794 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="ceilometer-notification-agent" containerID="cri-o://8843a34c119d9af8356c166026940df0ffa79b997efc254c2751e625f08fce31" gracePeriod=30 Jan 27 23:07:57 crc kubenswrapper[4803]: I0127 23:07:57.449051 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.929447885 podStartE2EDuration="5.449035439s" podCreationTimestamp="2026-01-27 23:07:52 +0000 UTC" firstStartedPulling="2026-01-27 23:07:53.334792846 +0000 UTC m=+4825.750814545" lastFinishedPulling="2026-01-27 23:07:56.8543804 +0000 UTC m=+4829.270402099" observedRunningTime="2026-01-27 23:07:57.448265788 +0000 UTC m=+4829.864287487" watchObservedRunningTime="2026-01-27 23:07:57.449035439 +0000 UTC m=+4829.865057138" Jan 27 23:07:58 crc kubenswrapper[4803]: I0127 23:07:58.438661 4803 generic.go:334] "Generic (PLEG): container finished" podID="a133717e-ae46-450e-b3ae-292103d98bbe" containerID="099a65eaf56c2e4d4282640b764c7b4a4d6b0ccfc7b903a3c5a77996cae06be0" exitCode=2 Jan 27 23:07:58 crc kubenswrapper[4803]: I0127 23:07:58.438774 4803 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a133717e-ae46-450e-b3ae-292103d98bbe","Type":"ContainerDied","Data":"099a65eaf56c2e4d4282640b764c7b4a4d6b0ccfc7b903a3c5a77996cae06be0"} Jan 27 23:08:02 crc kubenswrapper[4803]: I0127 23:08:02.484435 4803 generic.go:334] "Generic (PLEG): container finished" podID="a133717e-ae46-450e-b3ae-292103d98bbe" containerID="2cdf786371f4591821183edd50cfea66204a3fdb537223f2a2b2ad26423e6860" exitCode=0 Jan 27 23:08:02 crc kubenswrapper[4803]: I0127 23:08:02.485060 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a133717e-ae46-450e-b3ae-292103d98bbe","Type":"ContainerDied","Data":"2cdf786371f4591821183edd50cfea66204a3fdb537223f2a2b2ad26423e6860"} Jan 27 23:08:03 crc kubenswrapper[4803]: I0127 23:08:03.292633 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mhx4f" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerName="registry-server" probeResult="failure" output=< Jan 27 23:08:03 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:08:03 crc kubenswrapper[4803]: > Jan 27 23:08:08 crc kubenswrapper[4803]: I0127 23:08:08.183060 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 23:08:13 crc kubenswrapper[4803]: I0127 23:08:13.293099 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mhx4f" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerName="registry-server" probeResult="failure" output=< Jan 27 23:08:13 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s Jan 27 23:08:13 crc kubenswrapper[4803]: > Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.354813 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mhx4f" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.454243 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mhx4f" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.492964 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7kj6c/must-gather-f8th5"] Jan 27 23:08:22 crc kubenswrapper[4803]: E0127 23:08:22.493617 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="registry-server" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.493643 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="registry-server" Jan 27 23:08:22 crc kubenswrapper[4803]: E0127 23:08:22.493698 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="extract-content" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.493707 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="extract-content" Jan 27 23:08:22 crc kubenswrapper[4803]: E0127 23:08:22.493725 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="extract-utilities" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.493733 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="extract-utilities" Jan 27 
23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.494049 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="654b6723-6b6d-41ac-92fe-f097f87735a4" containerName="registry-server" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.495734 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7kj6c/must-gather-f8th5" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.498505 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7kj6c"/"default-dockercfg-t2lbd" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.500807 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7kj6c"/"kube-root-ca.crt" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.500837 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7kj6c"/"openshift-service-ca.crt" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.522449 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7kj6c/must-gather-f8th5"] Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.626975 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mhx4f"] Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.628737 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmmfs\" (UniqueName: \"kubernetes.io/projected/a4215496-c9dc-41d2-a133-042eb98a0820-kube-api-access-nmmfs\") pod \"must-gather-f8th5\" (UID: \"a4215496-c9dc-41d2-a133-042eb98a0820\") " pod="openshift-must-gather-7kj6c/must-gather-f8th5" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.628883 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4215496-c9dc-41d2-a133-042eb98a0820-must-gather-output\") pod \"must-gather-f8th5\" (UID: \"a4215496-c9dc-41d2-a133-042eb98a0820\") " pod="openshift-must-gather-7kj6c/must-gather-f8th5" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.731473 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmmfs\" (UniqueName: \"kubernetes.io/projected/a4215496-c9dc-41d2-a133-042eb98a0820-kube-api-access-nmmfs\") pod \"must-gather-f8th5\" (UID: \"a4215496-c9dc-41d2-a133-042eb98a0820\") " pod="openshift-must-gather-7kj6c/must-gather-f8th5" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.731599 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4215496-c9dc-41d2-a133-042eb98a0820-must-gather-output\") pod \"must-gather-f8th5\" (UID: \"a4215496-c9dc-41d2-a133-042eb98a0820\") " pod="openshift-must-gather-7kj6c/must-gather-f8th5" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.732185 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4215496-c9dc-41d2-a133-042eb98a0820-must-gather-output\") pod \"must-gather-f8th5\" (UID: \"a4215496-c9dc-41d2-a133-042eb98a0820\") " pod="openshift-must-gather-7kj6c/must-gather-f8th5" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.760528 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmmfs\" (UniqueName: 
\"kubernetes.io/projected/a4215496-c9dc-41d2-a133-042eb98a0820-kube-api-access-nmmfs\") pod \"must-gather-f8th5\" (UID: \"a4215496-c9dc-41d2-a133-042eb98a0820\") " pod="openshift-must-gather-7kj6c/must-gather-f8th5" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.784767 4803 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 23:08:22 crc kubenswrapper[4803]: I0127 23:08:22.819924 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7kj6c/must-gather-f8th5" Jan 27 23:08:23 crc kubenswrapper[4803]: I0127 23:08:23.711698 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mhx4f" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerName="registry-server" containerID="cri-o://aa8cb287e08f6b2576f9bdbdad9ce4bf1e477197b34a93d0dabe59c49fcea125" gracePeriod=2 Jan 27 23:08:23 crc kubenswrapper[4803]: I0127 23:08:23.982740 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7kj6c/must-gather-f8th5"] Jan 27 23:08:24 crc kubenswrapper[4803]: I0127 23:08:24.723476 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/must-gather-f8th5" event={"ID":"a4215496-c9dc-41d2-a133-042eb98a0820","Type":"ContainerStarted","Data":"3e86b91c100c1aff1e2e2ebb2fdc5ca81d15819ee69f84c09a0cd5152f3f142c"} Jan 27 23:08:24 crc kubenswrapper[4803]: I0127 23:08:24.726548 4803 generic.go:334] "Generic (PLEG): container finished" podID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerID="aa8cb287e08f6b2576f9bdbdad9ce4bf1e477197b34a93d0dabe59c49fcea125" exitCode=0 Jan 27 23:08:24 crc kubenswrapper[4803]: I0127 23:08:24.726586 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhx4f" event={"ID":"65b25ebb-046c-47af-b45a-5da95b17f7d5","Type":"ContainerDied","Data":"aa8cb287e08f6b2576f9bdbdad9ce4bf1e477197b34a93d0dabe59c49fcea125"} Jan 27 23:08:24 crc kubenswrapper[4803]: I0127 23:08:24.867709 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mhx4f" Jan 27 23:08:24 crc kubenswrapper[4803]: I0127 23:08:24.987281 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-catalog-content\") pod \"65b25ebb-046c-47af-b45a-5da95b17f7d5\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " Jan 27 23:08:24 crc kubenswrapper[4803]: I0127 23:08:24.987373 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm99m\" (UniqueName: \"kubernetes.io/projected/65b25ebb-046c-47af-b45a-5da95b17f7d5-kube-api-access-bm99m\") pod \"65b25ebb-046c-47af-b45a-5da95b17f7d5\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " Jan 27 23:08:24 crc kubenswrapper[4803]: I0127 23:08:24.987430 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-utilities\") pod \"65b25ebb-046c-47af-b45a-5da95b17f7d5\" (UID: \"65b25ebb-046c-47af-b45a-5da95b17f7d5\") " Jan 27 23:08:24 crc kubenswrapper[4803]: I0127 23:08:24.989318 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-utilities" (OuterVolumeSpecName: "utilities") pod "65b25ebb-046c-47af-b45a-5da95b17f7d5" (UID: "65b25ebb-046c-47af-b45a-5da95b17f7d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.084451 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b25ebb-046c-47af-b45a-5da95b17f7d5-kube-api-access-bm99m" (OuterVolumeSpecName: "kube-api-access-bm99m") pod "65b25ebb-046c-47af-b45a-5da95b17f7d5" (UID: "65b25ebb-046c-47af-b45a-5da95b17f7d5"). InnerVolumeSpecName "kube-api-access-bm99m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.090527 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm99m\" (UniqueName: \"kubernetes.io/projected/65b25ebb-046c-47af-b45a-5da95b17f7d5-kube-api-access-bm99m\") on node \"crc\" DevicePath \"\"" Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.090572 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.129663 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65b25ebb-046c-47af-b45a-5da95b17f7d5" (UID: "65b25ebb-046c-47af-b45a-5da95b17f7d5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.193529 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b25ebb-046c-47af-b45a-5da95b17f7d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.739416 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhx4f" event={"ID":"65b25ebb-046c-47af-b45a-5da95b17f7d5","Type":"ContainerDied","Data":"ce635517d275b14c3b807d884f3fb09e57f273f480d7d174eca13d8b692e15d3"} Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.739484 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhx4f" Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.739761 4803 scope.go:117] "RemoveContainer" containerID="aa8cb287e08f6b2576f9bdbdad9ce4bf1e477197b34a93d0dabe59c49fcea125" Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.780090 4803 scope.go:117] "RemoveContainer" containerID="2b4f810eae0dd9a187d250436b7f3eadc8762ad943575c660257907323089259" Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.792173 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mhx4f"] Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.808124 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mhx4f"] Jan 27 23:08:25 crc kubenswrapper[4803]: I0127 23:08:25.813692 4803 scope.go:117] "RemoveContainer" containerID="a396bfe8c544523c53ca88b63377103055eba2f125c18f870ac77604928df612" Jan 27 23:08:26 crc kubenswrapper[4803]: I0127 23:08:26.322276 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" path="/var/lib/kubelet/pods/65b25ebb-046c-47af-b45a-5da95b17f7d5/volumes" Jan 27 23:08:27 crc kubenswrapper[4803]: I0127 23:08:27.769897 4803 generic.go:334] "Generic (PLEG): container finished" podID="a133717e-ae46-450e-b3ae-292103d98bbe" containerID="022281cd3c3e4de8185e0e75651436b5fcfa447b0bd41b759e0d0c8a853bc79d" exitCode=137 Jan 27 23:08:27 crc kubenswrapper[4803]: I0127 23:08:27.770201 4803 generic.go:334] "Generic (PLEG): container finished" podID="a133717e-ae46-450e-b3ae-292103d98bbe" containerID="8843a34c119d9af8356c166026940df0ffa79b997efc254c2751e625f08fce31" exitCode=137 Jan 27 23:08:27 crc kubenswrapper[4803]: I0127 23:08:27.769995 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a133717e-ae46-450e-b3ae-292103d98bbe","Type":"ContainerDied","Data":"022281cd3c3e4de8185e0e75651436b5fcfa447b0bd41b759e0d0c8a853bc79d"} Jan 27 23:08:27 crc kubenswrapper[4803]: I0127 23:08:27.770245 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a133717e-ae46-450e-b3ae-292103d98bbe","Type":"ContainerDied","Data":"8843a34c119d9af8356c166026940df0ffa79b997efc254c2751e625f08fce31"} Jan 27 23:08:32 crc kubenswrapper[4803]: I0127 23:08:32.832535 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/must-gather-f8th5" event={"ID":"a4215496-c9dc-41d2-a133-042eb98a0820","Type":"ContainerStarted","Data":"f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06"} Jan 27 23:08:32 crc kubenswrapper[4803]: I0127 23:08:32.981719 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.111232 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-config-data\") pod \"a133717e-ae46-450e-b3ae-292103d98bbe\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.111559 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm7pr\" (UniqueName: \"kubernetes.io/projected/a133717e-ae46-450e-b3ae-292103d98bbe-kube-api-access-jm7pr\") pod \"a133717e-ae46-450e-b3ae-292103d98bbe\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.111604 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-log-httpd\") pod \"a133717e-ae46-450e-b3ae-292103d98bbe\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.111644 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-scripts\") pod \"a133717e-ae46-450e-b3ae-292103d98bbe\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.111658 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-run-httpd\") pod \"a133717e-ae46-450e-b3ae-292103d98bbe\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.111753 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-sg-core-conf-yaml\") pod \"a133717e-ae46-450e-b3ae-292103d98bbe\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.111808 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-ceilometer-tls-certs\") pod \"a133717e-ae46-450e-b3ae-292103d98bbe\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.111885 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-combined-ca-bundle\") pod \"a133717e-ae46-450e-b3ae-292103d98bbe\" (UID: \"a133717e-ae46-450e-b3ae-292103d98bbe\") " Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.113003 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a133717e-ae46-450e-b3ae-292103d98bbe" (UID: "a133717e-ae46-450e-b3ae-292103d98bbe"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.113302 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a133717e-ae46-450e-b3ae-292103d98bbe" (UID: "a133717e-ae46-450e-b3ae-292103d98bbe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.116993 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-scripts" (OuterVolumeSpecName: "scripts") pod "a133717e-ae46-450e-b3ae-292103d98bbe" (UID: "a133717e-ae46-450e-b3ae-292103d98bbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.117569 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a133717e-ae46-450e-b3ae-292103d98bbe-kube-api-access-jm7pr" (OuterVolumeSpecName: "kube-api-access-jm7pr") pod "a133717e-ae46-450e-b3ae-292103d98bbe" (UID: "a133717e-ae46-450e-b3ae-292103d98bbe"). InnerVolumeSpecName "kube-api-access-jm7pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.150700 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a133717e-ae46-450e-b3ae-292103d98bbe" (UID: "a133717e-ae46-450e-b3ae-292103d98bbe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.179385 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a133717e-ae46-450e-b3ae-292103d98bbe" (UID: "a133717e-ae46-450e-b3ae-292103d98bbe"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.207042 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a133717e-ae46-450e-b3ae-292103d98bbe" (UID: "a133717e-ae46-450e-b3ae-292103d98bbe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.215112 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm7pr\" (UniqueName: \"kubernetes.io/projected/a133717e-ae46-450e-b3ae-292103d98bbe-kube-api-access-jm7pr\") on node \"crc\" DevicePath \"\"" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.215141 4803 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.215151 4803 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.215160 4803 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a133717e-ae46-450e-b3ae-292103d98bbe-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.215170 4803 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.215180 4803 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.215188 4803 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.237230 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-config-data" (OuterVolumeSpecName: "config-data") pod "a133717e-ae46-450e-b3ae-292103d98bbe" (UID: "a133717e-ae46-450e-b3ae-292103d98bbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.317105 4803 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a133717e-ae46-450e-b3ae-292103d98bbe-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.844369 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/must-gather-f8th5" event={"ID":"a4215496-c9dc-41d2-a133-042eb98a0820","Type":"ContainerStarted","Data":"2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f"} Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.847256 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a133717e-ae46-450e-b3ae-292103d98bbe","Type":"ContainerDied","Data":"cf8ffc15939e91ccf8c13f3c1bad6415618f17a109700e353f8b78291c9df2df"} Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.847320 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.847329 4803 scope.go:117] "RemoveContainer" containerID="022281cd3c3e4de8185e0e75651436b5fcfa447b0bd41b759e0d0c8a853bc79d" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.879522 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7kj6c/must-gather-f8th5" podStartSLOduration=3.839286931 podStartE2EDuration="11.879498459s" podCreationTimestamp="2026-01-27 23:08:22 +0000 UTC" firstStartedPulling="2026-01-27 23:08:24.503684871 +0000 UTC m=+4856.919706570" lastFinishedPulling="2026-01-27 23:08:32.543896399 +0000 UTC m=+4864.959918098" observedRunningTime="2026-01-27 23:08:33.862120079 +0000 UTC m=+4866.278141808" watchObservedRunningTime="2026-01-27 23:08:33.879498459 +0000 UTC m=+4866.295520168" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.880095 4803 scope.go:117] "RemoveContainer" containerID="099a65eaf56c2e4d4282640b764c7b4a4d6b0ccfc7b903a3c5a77996cae06be0" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.899493 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.905357 4803 scope.go:117] "RemoveContainer" containerID="8843a34c119d9af8356c166026940df0ffa79b997efc254c2751e625f08fce31" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.918987 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.938686 4803 scope.go:117] "RemoveContainer" containerID="2cdf786371f4591821183edd50cfea66204a3fdb537223f2a2b2ad26423e6860" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.942107 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 23:08:33 crc kubenswrapper[4803]: E0127 23:08:33.942702 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="ceilometer-notification-agent" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.942719 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="ceilometer-notification-agent" Jan 27 23:08:33 crc kubenswrapper[4803]: E0127 23:08:33.942744 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerName="registry-server" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.942752 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerName="registry-server" Jan 27 23:08:33 crc kubenswrapper[4803]: E0127 23:08:33.942760 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="ceilometer-central-agent" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.942766 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="ceilometer-central-agent" Jan 27 23:08:33 crc kubenswrapper[4803]: E0127 23:08:33.942781 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="sg-core" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.942788 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="sg-core" Jan 27 23:08:33 crc kubenswrapper[4803]: E0127 23:08:33.942800 4803 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerName="extract-content" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.942806 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerName="extract-content" Jan 27 23:08:33 crc kubenswrapper[4803]: E0127 23:08:33.942838 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerName="extract-utilities" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.942858 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerName="extract-utilities" Jan 27 23:08:33 crc kubenswrapper[4803]: E0127 23:08:33.942869 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="proxy-httpd" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.942875 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="proxy-httpd" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.943102 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="proxy-httpd" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.943116 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="ceilometer-central-agent" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.943137 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="ceilometer-notification-agent" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.943145 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" containerName="sg-core" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.943160 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b25ebb-046c-47af-b45a-5da95b17f7d5" containerName="registry-server" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.945335 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.947599 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.947816 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.949022 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 23:08:33 crc kubenswrapper[4803]: I0127 23:08:33.974567 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.032668 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.032712 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-config-data\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.032768 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b618bdc8-5c20-45d8-be2e-e7c1379fa992-run-httpd\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.032802 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.032900 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b7dg\" (UniqueName: \"kubernetes.io/projected/b618bdc8-5c20-45d8-be2e-e7c1379fa992-kube-api-access-6b7dg\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.032965 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b618bdc8-5c20-45d8-be2e-e7c1379fa992-log-httpd\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.033025 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-scripts\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.033127 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.134926 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-config-data\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.134997 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b618bdc8-5c20-45d8-be2e-e7c1379fa992-run-httpd\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.135025 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.135100 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b7dg\" (UniqueName: \"kubernetes.io/projected/b618bdc8-5c20-45d8-be2e-e7c1379fa992-kube-api-access-6b7dg\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.135162 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b618bdc8-5c20-45d8-be2e-e7c1379fa992-log-httpd\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.135192 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-scripts\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.135254 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.135280 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.136307 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b618bdc8-5c20-45d8-be2e-e7c1379fa992-run-httpd\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.136550 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b618bdc8-5c20-45d8-be2e-e7c1379fa992-log-httpd\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.142196 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.142597 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.144584 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-scripts\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.148314 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.153297 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b618bdc8-5c20-45d8-be2e-e7c1379fa992-config-data\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.155205 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b7dg\" (UniqueName: \"kubernetes.io/projected/b618bdc8-5c20-45d8-be2e-e7c1379fa992-kube-api-access-6b7dg\") pod \"ceilometer-0\" (UID: \"b618bdc8-5c20-45d8-be2e-e7c1379fa992\") " pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.269409 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.323161 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a133717e-ae46-450e-b3ae-292103d98bbe" path="/var/lib/kubelet/pods/a133717e-ae46-450e-b3ae-292103d98bbe/volumes" Jan 27 23:08:34 crc kubenswrapper[4803]: I0127 23:08:34.891800 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 23:08:34 crc kubenswrapper[4803]: W0127 23:08:34.903358 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb618bdc8_5c20_45d8_be2e_e7c1379fa992.slice/crio-47fdcb00643465c000ea4f7798c13c323ebafb72df1dc5e7ecd96c49851f5e5e WatchSource:0}: Error finding container 47fdcb00643465c000ea4f7798c13c323ebafb72df1dc5e7ecd96c49851f5e5e: Status 404 returned error can't find the container with id 47fdcb00643465c000ea4f7798c13c323ebafb72df1dc5e7ecd96c49851f5e5e Jan 27 23:08:35 crc kubenswrapper[4803]: I0127 23:08:35.874422 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b618bdc8-5c20-45d8-be2e-e7c1379fa992","Type":"ContainerStarted","Data":"d1902621636d338c50cfe545573573fe438f6499235c0b1b477415054e4a6a23"} Jan 27 23:08:35 crc kubenswrapper[4803]: I0127 23:08:35.874805 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b618bdc8-5c20-45d8-be2e-e7c1379fa992","Type":"ContainerStarted","Data":"47fdcb00643465c000ea4f7798c13c323ebafb72df1dc5e7ecd96c49851f5e5e"} Jan 27 23:08:36 crc kubenswrapper[4803]: I0127 23:08:36.887006 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b618bdc8-5c20-45d8-be2e-e7c1379fa992","Type":"ContainerStarted","Data":"14a95ebb7c71857245c3de7de741e04a8dc3e0108cb7333ffdfcdb1178da9de6"} Jan 27 23:08:36 crc kubenswrapper[4803]: I0127 23:08:36.887555 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b618bdc8-5c20-45d8-be2e-e7c1379fa992","Type":"ContainerStarted","Data":"e78a206ccdde8034af7e721646b12b33952ca9cfc684eb996f040ce3336b8f6a"} Jan 27 23:08:37 crc kubenswrapper[4803]: E0127 23:08:37.321673 4803 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.194:43492->38.102.83.194:35783: write tcp 38.102.83.194:43492->38.102.83.194:35783: write: broken pipe Jan 27 23:08:38 crc kubenswrapper[4803]: I0127 23:08:38.917044 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7kj6c/crc-debug-zrlbk"] Jan 27 23:08:38 crc kubenswrapper[4803]: I0127 23:08:38.920135 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" Jan 27 23:08:38 crc kubenswrapper[4803]: I0127 23:08:38.943264 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b618bdc8-5c20-45d8-be2e-e7c1379fa992","Type":"ContainerStarted","Data":"b2ff874966cb66b98bc3c33308d75e43d08106c22f9321991d0e9a1675c7e54d"} Jan 27 23:08:38 crc kubenswrapper[4803]: I0127 23:08:38.944365 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 23:08:38 crc kubenswrapper[4803]: I0127 23:08:38.969815 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9280488140000003 podStartE2EDuration="5.969792305s" podCreationTimestamp="2026-01-27 23:08:33 +0000 UTC" firstStartedPulling="2026-01-27 23:08:34.906002567 +0000 UTC m=+4867.322024266" lastFinishedPulling="2026-01-27 23:08:37.947746048 +0000 UTC m=+4870.363767757" observedRunningTime="2026-01-27 23:08:38.968241784 +0000 UTC m=+4871.384263503" watchObservedRunningTime="2026-01-27 23:08:38.969792305 +0000 UTC m=+4871.385814014" Jan 27 23:08:39 crc kubenswrapper[4803]: I0127 23:08:39.071136 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41eec4af-3de4-482c-8399-28161630e7d3-host\") pod \"crc-debug-zrlbk\" (UID: \"41eec4af-3de4-482c-8399-28161630e7d3\") " pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" Jan 27 23:08:39 crc kubenswrapper[4803]: I0127 23:08:39.071208 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rht\" (UniqueName: \"kubernetes.io/projected/41eec4af-3de4-482c-8399-28161630e7d3-kube-api-access-p4rht\") pod \"crc-debug-zrlbk\" (UID: \"41eec4af-3de4-482c-8399-28161630e7d3\") " pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" Jan 27 23:08:39 crc kubenswrapper[4803]: I0127 23:08:39.173931 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41eec4af-3de4-482c-8399-28161630e7d3-host\") pod \"crc-debug-zrlbk\" (UID: \"41eec4af-3de4-482c-8399-28161630e7d3\") " pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" Jan 27 23:08:39 crc kubenswrapper[4803]: I0127 23:08:39.174007 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rht\" (UniqueName: \"kubernetes.io/projected/41eec4af-3de4-482c-8399-28161630e7d3-kube-api-access-p4rht\") pod \"crc-debug-zrlbk\" (UID: \"41eec4af-3de4-482c-8399-28161630e7d3\") " pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" Jan 27 23:08:39 crc kubenswrapper[4803]: I0127 23:08:39.174922 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41eec4af-3de4-482c-8399-28161630e7d3-host\") pod \"crc-debug-zrlbk\" (UID: \"41eec4af-3de4-482c-8399-28161630e7d3\") " pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" Jan 27 23:08:39 crc kubenswrapper[4803]: I0127 23:08:39.193462 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rht\" (UniqueName: \"kubernetes.io/projected/41eec4af-3de4-482c-8399-28161630e7d3-kube-api-access-p4rht\") pod \"crc-debug-zrlbk\" (UID: \"41eec4af-3de4-482c-8399-28161630e7d3\") " pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" Jan 27 23:08:39 crc kubenswrapper[4803]: I0127 23:08:39.243570 4803 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" Jan 27 23:08:39 crc kubenswrapper[4803]: W0127 23:08:39.281677 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41eec4af_3de4_482c_8399_28161630e7d3.slice/crio-5d13d18fe26c0f131f1dacca51a6e4c7505135d6691487ec7f031758c65de01a WatchSource:0}: Error finding container 5d13d18fe26c0f131f1dacca51a6e4c7505135d6691487ec7f031758c65de01a: Status 404 returned error can't find the container with id 5d13d18fe26c0f131f1dacca51a6e4c7505135d6691487ec7f031758c65de01a Jan 27 23:08:39 crc kubenswrapper[4803]: I0127 23:08:39.954422 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" event={"ID":"41eec4af-3de4-482c-8399-28161630e7d3","Type":"ContainerStarted","Data":"5d13d18fe26c0f131f1dacca51a6e4c7505135d6691487ec7f031758c65de01a"} Jan 27 23:08:51 crc kubenswrapper[4803]: I0127 23:08:51.074066 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" event={"ID":"41eec4af-3de4-482c-8399-28161630e7d3","Type":"ContainerStarted","Data":"5c72cb49946dff3165fd85fbcddca8d5ad78525d88ca3cfca12881ea4913827a"} Jan 27 23:08:51 crc kubenswrapper[4803]: I0127 23:08:51.090316 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" podStartSLOduration=1.868013427 podStartE2EDuration="13.090294908s" podCreationTimestamp="2026-01-27 23:08:38 +0000 UTC" firstStartedPulling="2026-01-27 23:08:39.283635456 +0000 UTC m=+4871.699657155" lastFinishedPulling="2026-01-27 23:08:50.505916917 +0000 UTC m=+4882.921938636" observedRunningTime="2026-01-27 23:08:51.089376573 +0000 UTC m=+4883.505398292" watchObservedRunningTime="2026-01-27 23:08:51.090294908 +0000 UTC m=+4883.506316607" Jan 27 23:09:04 crc kubenswrapper[4803]: I0127 23:09:04.288423 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 23:09:36 crc kubenswrapper[4803]: I0127 23:09:36.580893 4803 generic.go:334] "Generic (PLEG): container finished" podID="f978ff10-12ad-4883-98d9-7ce831fad147" containerID="01d358f5c285efb0d85a58dc84fe3ddf3c305b211f25861b4e7f911bf4fbca0f" exitCode=0 Jan 27 23:09:36 crc kubenswrapper[4803]: I0127 23:09:36.580960 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" event={"ID":"f978ff10-12ad-4883-98d9-7ce831fad147","Type":"ContainerDied","Data":"01d358f5c285efb0d85a58dc84fe3ddf3c305b211f25861b4e7f911bf4fbca0f"} Jan 27 23:09:36 crc kubenswrapper[4803]: I0127 23:09:36.581506 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" event={"ID":"f978ff10-12ad-4883-98d9-7ce831fad147","Type":"ContainerStarted","Data":"c5272258e2be46eaa9f46086d251c649f8e1d16542f8130b2d055cdfc25accfb"} Jan 27 23:09:40 crc kubenswrapper[4803]: I0127 23:09:40.629140 4803 generic.go:334] "Generic (PLEG): container finished" podID="41eec4af-3de4-482c-8399-28161630e7d3" containerID="5c72cb49946dff3165fd85fbcddca8d5ad78525d88ca3cfca12881ea4913827a" exitCode=0 Jan 27 23:09:40 crc kubenswrapper[4803]: I0127 23:09:40.629319 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" event={"ID":"41eec4af-3de4-482c-8399-28161630e7d3","Type":"ContainerDied","Data":"5c72cb49946dff3165fd85fbcddca8d5ad78525d88ca3cfca12881ea4913827a"} Jan 27 
23:09:41 crc kubenswrapper[4803]: I0127 23:09:41.799515 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" Jan 27 23:09:41 crc kubenswrapper[4803]: I0127 23:09:41.843779 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7kj6c/crc-debug-zrlbk"] Jan 27 23:09:41 crc kubenswrapper[4803]: I0127 23:09:41.858649 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7kj6c/crc-debug-zrlbk"] Jan 27 23:09:41 crc kubenswrapper[4803]: I0127 23:09:41.871520 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4rht\" (UniqueName: \"kubernetes.io/projected/41eec4af-3de4-482c-8399-28161630e7d3-kube-api-access-p4rht\") pod \"41eec4af-3de4-482c-8399-28161630e7d3\" (UID: \"41eec4af-3de4-482c-8399-28161630e7d3\") " Jan 27 23:09:41 crc kubenswrapper[4803]: I0127 23:09:41.871763 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41eec4af-3de4-482c-8399-28161630e7d3-host\") pod \"41eec4af-3de4-482c-8399-28161630e7d3\" (UID: \"41eec4af-3de4-482c-8399-28161630e7d3\") " Jan 27 23:09:41 crc kubenswrapper[4803]: I0127 23:09:41.871868 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41eec4af-3de4-482c-8399-28161630e7d3-host" (OuterVolumeSpecName: "host") pod "41eec4af-3de4-482c-8399-28161630e7d3" (UID: "41eec4af-3de4-482c-8399-28161630e7d3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 23:09:41 crc kubenswrapper[4803]: I0127 23:09:41.872693 4803 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41eec4af-3de4-482c-8399-28161630e7d3-host\") on node \"crc\" DevicePath \"\"" Jan 27 23:09:41 crc kubenswrapper[4803]: I0127 23:09:41.881954 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41eec4af-3de4-482c-8399-28161630e7d3-kube-api-access-p4rht" (OuterVolumeSpecName: "kube-api-access-p4rht") pod "41eec4af-3de4-482c-8399-28161630e7d3" (UID: "41eec4af-3de4-482c-8399-28161630e7d3"). InnerVolumeSpecName "kube-api-access-p4rht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:09:41 crc kubenswrapper[4803]: I0127 23:09:41.975194 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4rht\" (UniqueName: \"kubernetes.io/projected/41eec4af-3de4-482c-8399-28161630e7d3-kube-api-access-p4rht\") on node \"crc\" DevicePath \"\"" Jan 27 23:09:42 crc kubenswrapper[4803]: I0127 23:09:42.319964 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41eec4af-3de4-482c-8399-28161630e7d3" path="/var/lib/kubelet/pods/41eec4af-3de4-482c-8399-28161630e7d3/volumes" Jan 27 23:09:42 crc kubenswrapper[4803]: I0127 23:09:42.651524 4803 scope.go:117] "RemoveContainer" containerID="5c72cb49946dff3165fd85fbcddca8d5ad78525d88ca3cfca12881ea4913827a" Jan 27 23:09:42 crc kubenswrapper[4803]: I0127 23:09:42.651545 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-zrlbk" Jan 27 23:09:42 crc kubenswrapper[4803]: I0127 23:09:42.995165 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7kj6c/crc-debug-75qpd"] Jan 27 23:09:42 crc kubenswrapper[4803]: E0127 23:09:42.996555 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41eec4af-3de4-482c-8399-28161630e7d3" containerName="container-00" Jan 27 23:09:42 crc kubenswrapper[4803]: I0127 23:09:42.996592 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="41eec4af-3de4-482c-8399-28161630e7d3" containerName="container-00" Jan 27 23:09:42 crc kubenswrapper[4803]: I0127 23:09:42.996806 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="41eec4af-3de4-482c-8399-28161630e7d3" containerName="container-00" Jan 27 23:09:42 crc kubenswrapper[4803]: I0127 23:09:42.997748 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-75qpd" Jan 27 23:09:43 crc kubenswrapper[4803]: I0127 23:09:43.104440 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqcqv\" (UniqueName: \"kubernetes.io/projected/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-kube-api-access-hqcqv\") pod \"crc-debug-75qpd\" (UID: \"e5305f8f-0c39-46f8-9eb7-cefdc2b07632\") " pod="openshift-must-gather-7kj6c/crc-debug-75qpd" Jan 27 23:09:43 crc kubenswrapper[4803]: I0127 23:09:43.104983 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-host\") pod \"crc-debug-75qpd\" (UID: \"e5305f8f-0c39-46f8-9eb7-cefdc2b07632\") " pod="openshift-must-gather-7kj6c/crc-debug-75qpd" Jan 27 23:09:43 crc kubenswrapper[4803]: I0127 23:09:43.215121 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-host\") pod \"crc-debug-75qpd\" (UID: \"e5305f8f-0c39-46f8-9eb7-cefdc2b07632\") " pod="openshift-must-gather-7kj6c/crc-debug-75qpd" Jan 27 23:09:43 crc kubenswrapper[4803]: I0127 23:09:43.215313 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-host\") pod \"crc-debug-75qpd\" (UID: \"e5305f8f-0c39-46f8-9eb7-cefdc2b07632\") " pod="openshift-must-gather-7kj6c/crc-debug-75qpd" Jan 27 23:09:43 crc kubenswrapper[4803]: I0127 23:09:43.215346 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqcqv\" (UniqueName: \"kubernetes.io/projected/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-kube-api-access-hqcqv\") pod \"crc-debug-75qpd\" (UID: \"e5305f8f-0c39-46f8-9eb7-cefdc2b07632\") " pod="openshift-must-gather-7kj6c/crc-debug-75qpd" Jan 27 23:09:43 crc kubenswrapper[4803]: I0127 23:09:43.238021 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqcqv\" (UniqueName: \"kubernetes.io/projected/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-kube-api-access-hqcqv\") pod \"crc-debug-75qpd\" (UID: \"e5305f8f-0c39-46f8-9eb7-cefdc2b07632\") " pod="openshift-must-gather-7kj6c/crc-debug-75qpd" Jan 27 23:09:43 crc kubenswrapper[4803]: I0127 23:09:43.317597 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-75qpd" Jan 27 23:09:43 crc kubenswrapper[4803]: I0127 23:09:43.661884 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/crc-debug-75qpd" event={"ID":"e5305f8f-0c39-46f8-9eb7-cefdc2b07632","Type":"ContainerStarted","Data":"f3c0767c5edd6d1cf4baa794d0887d77496dcfec20e184421664ce0bb779e4b0"} Jan 27 23:09:43 crc kubenswrapper[4803]: I0127 23:09:43.662251 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/crc-debug-75qpd" event={"ID":"e5305f8f-0c39-46f8-9eb7-cefdc2b07632","Type":"ContainerStarted","Data":"4aa7af7d00e3d4515f91887af8c4b243c9616cc1dbe6bd77035dce5e8ab66a83"} Jan 27 23:09:43 crc kubenswrapper[4803]: I0127 23:09:43.701302 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7kj6c/crc-debug-75qpd" podStartSLOduration=1.701278576 podStartE2EDuration="1.701278576s" podCreationTimestamp="2026-01-27 23:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 23:09:43.673517596 +0000 UTC m=+4936.089539295" watchObservedRunningTime="2026-01-27 23:09:43.701278576 +0000 UTC m=+4936.117300275" Jan 27 23:09:44 crc kubenswrapper[4803]: I0127 23:09:44.674628 4803 generic.go:334] "Generic (PLEG): container finished" podID="e5305f8f-0c39-46f8-9eb7-cefdc2b07632" containerID="f3c0767c5edd6d1cf4baa794d0887d77496dcfec20e184421664ce0bb779e4b0" exitCode=0 Jan 27 23:09:44 crc kubenswrapper[4803]: I0127 23:09:44.674671 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/crc-debug-75qpd" event={"ID":"e5305f8f-0c39-46f8-9eb7-cefdc2b07632","Type":"ContainerDied","Data":"f3c0767c5edd6d1cf4baa794d0887d77496dcfec20e184421664ce0bb779e4b0"} Jan 27 23:09:45 crc kubenswrapper[4803]: I0127 23:09:45.814327 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-75qpd" Jan 27 23:09:45 crc kubenswrapper[4803]: I0127 23:09:45.849560 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7kj6c/crc-debug-75qpd"] Jan 27 23:09:45 crc kubenswrapper[4803]: I0127 23:09:45.859838 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7kj6c/crc-debug-75qpd"] Jan 27 23:09:45 crc kubenswrapper[4803]: I0127 23:09:45.872218 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqcqv\" (UniqueName: \"kubernetes.io/projected/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-kube-api-access-hqcqv\") pod \"e5305f8f-0c39-46f8-9eb7-cefdc2b07632\" (UID: \"e5305f8f-0c39-46f8-9eb7-cefdc2b07632\") " Jan 27 23:09:45 crc kubenswrapper[4803]: I0127 23:09:45.872492 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-host\") pod \"e5305f8f-0c39-46f8-9eb7-cefdc2b07632\" (UID: \"e5305f8f-0c39-46f8-9eb7-cefdc2b07632\") " Jan 27 23:09:45 crc kubenswrapper[4803]: I0127 23:09:45.872560 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-host" (OuterVolumeSpecName: "host") pod "e5305f8f-0c39-46f8-9eb7-cefdc2b07632" (UID: "e5305f8f-0c39-46f8-9eb7-cefdc2b07632"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 23:09:45 crc kubenswrapper[4803]: I0127 23:09:45.873144 4803 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-host\") on node \"crc\" DevicePath \"\"" Jan 27 23:09:45 crc kubenswrapper[4803]: I0127 23:09:45.881998 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-kube-api-access-hqcqv" (OuterVolumeSpecName: "kube-api-access-hqcqv") pod "e5305f8f-0c39-46f8-9eb7-cefdc2b07632" (UID: "e5305f8f-0c39-46f8-9eb7-cefdc2b07632"). InnerVolumeSpecName "kube-api-access-hqcqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:09:45 crc kubenswrapper[4803]: I0127 23:09:45.974906 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqcqv\" (UniqueName: \"kubernetes.io/projected/e5305f8f-0c39-46f8-9eb7-cefdc2b07632-kube-api-access-hqcqv\") on node \"crc\" DevicePath \"\"" Jan 27 23:09:46 crc kubenswrapper[4803]: I0127 23:09:46.346184 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 23:09:46 crc kubenswrapper[4803]: I0127 23:09:46.346262 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 23:09:46 crc kubenswrapper[4803]: I0127 23:09:46.346690 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5305f8f-0c39-46f8-9eb7-cefdc2b07632" path="/var/lib/kubelet/pods/e5305f8f-0c39-46f8-9eb7-cefdc2b07632/volumes" Jan 27 23:09:46 crc kubenswrapper[4803]: I0127 23:09:46.695200 4803 scope.go:117] "RemoveContainer" containerID="f3c0767c5edd6d1cf4baa794d0887d77496dcfec20e184421664ce0bb779e4b0" Jan 27 23:09:46 crc kubenswrapper[4803]: I0127 23:09:46.695268 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-75qpd" Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.023339 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7kj6c/crc-debug-2p2r7"] Jan 27 23:09:47 crc kubenswrapper[4803]: E0127 23:09:47.024201 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5305f8f-0c39-46f8-9eb7-cefdc2b07632" containerName="container-00" Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.024217 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5305f8f-0c39-46f8-9eb7-cefdc2b07632" containerName="container-00" Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.024522 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5305f8f-0c39-46f8-9eb7-cefdc2b07632" containerName="container-00" Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.025562 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.102182 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a557703f-e168-45a1-b090-b9cdb114e0e1-host\") pod \"crc-debug-2p2r7\" (UID: \"a557703f-e168-45a1-b090-b9cdb114e0e1\") " pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.102365 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sctx\" (UniqueName: \"kubernetes.io/projected/a557703f-e168-45a1-b090-b9cdb114e0e1-kube-api-access-5sctx\") pod \"crc-debug-2p2r7\" (UID: \"a557703f-e168-45a1-b090-b9cdb114e0e1\") " pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.204945 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sctx\" (UniqueName: \"kubernetes.io/projected/a557703f-e168-45a1-b090-b9cdb114e0e1-kube-api-access-5sctx\") pod \"crc-debug-2p2r7\" (UID: \"a557703f-e168-45a1-b090-b9cdb114e0e1\") " pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.205520 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a557703f-e168-45a1-b090-b9cdb114e0e1-host\") pod \"crc-debug-2p2r7\" (UID: \"a557703f-e168-45a1-b090-b9cdb114e0e1\") " pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.205662 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a557703f-e168-45a1-b090-b9cdb114e0e1-host\") pod \"crc-debug-2p2r7\" (UID: \"a557703f-e168-45a1-b090-b9cdb114e0e1\") " pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.226039 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sctx\" (UniqueName: \"kubernetes.io/projected/a557703f-e168-45a1-b090-b9cdb114e0e1-kube-api-access-5sctx\") pod \"crc-debug-2p2r7\" (UID: \"a557703f-e168-45a1-b090-b9cdb114e0e1\") " pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.345605 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" Jan 27 23:09:47 crc kubenswrapper[4803]: W0127 23:09:47.387439 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda557703f_e168_45a1_b090_b9cdb114e0e1.slice/crio-cb962344910a51550e5ed2a091a4b12414d75c32bff6dd1b58a00b4de8d92a48 WatchSource:0}: Error finding container cb962344910a51550e5ed2a091a4b12414d75c32bff6dd1b58a00b4de8d92a48: Status 404 returned error can't find the container with id cb962344910a51550e5ed2a091a4b12414d75c32bff6dd1b58a00b4de8d92a48 Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.708496 4803 generic.go:334] "Generic (PLEG): container finished" podID="a557703f-e168-45a1-b090-b9cdb114e0e1" containerID="4b0b7fd58196878e11fd12313de9b7270a858106aedabc986a7b8b1383f38473" exitCode=0 Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.708560 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" event={"ID":"a557703f-e168-45a1-b090-b9cdb114e0e1","Type":"ContainerDied","Data":"4b0b7fd58196878e11fd12313de9b7270a858106aedabc986a7b8b1383f38473"} Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.708594 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" event={"ID":"a557703f-e168-45a1-b090-b9cdb114e0e1","Type":"ContainerStarted","Data":"cb962344910a51550e5ed2a091a4b12414d75c32bff6dd1b58a00b4de8d92a48"} Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.751705 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7kj6c/crc-debug-2p2r7"] Jan 27 23:09:47 crc kubenswrapper[4803]: I0127 23:09:47.762577 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7kj6c/crc-debug-2p2r7"] Jan 27 23:09:48 crc kubenswrapper[4803]: I0127 23:09:48.851351 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" Jan 27 23:09:48 crc kubenswrapper[4803]: I0127 23:09:48.946910 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sctx\" (UniqueName: \"kubernetes.io/projected/a557703f-e168-45a1-b090-b9cdb114e0e1-kube-api-access-5sctx\") pod \"a557703f-e168-45a1-b090-b9cdb114e0e1\" (UID: \"a557703f-e168-45a1-b090-b9cdb114e0e1\") " Jan 27 23:09:48 crc kubenswrapper[4803]: I0127 23:09:48.946997 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a557703f-e168-45a1-b090-b9cdb114e0e1-host\") pod \"a557703f-e168-45a1-b090-b9cdb114e0e1\" (UID: \"a557703f-e168-45a1-b090-b9cdb114e0e1\") " Jan 27 23:09:48 crc kubenswrapper[4803]: I0127 23:09:48.947095 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a557703f-e168-45a1-b090-b9cdb114e0e1-host" (OuterVolumeSpecName: "host") pod "a557703f-e168-45a1-b090-b9cdb114e0e1" (UID: "a557703f-e168-45a1-b090-b9cdb114e0e1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 23:09:48 crc kubenswrapper[4803]: I0127 23:09:48.947618 4803 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a557703f-e168-45a1-b090-b9cdb114e0e1-host\") on node \"crc\" DevicePath \"\"" Jan 27 23:09:48 crc kubenswrapper[4803]: I0127 23:09:48.953665 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a557703f-e168-45a1-b090-b9cdb114e0e1-kube-api-access-5sctx" (OuterVolumeSpecName: "kube-api-access-5sctx") pod "a557703f-e168-45a1-b090-b9cdb114e0e1" (UID: "a557703f-e168-45a1-b090-b9cdb114e0e1"). InnerVolumeSpecName "kube-api-access-5sctx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:09:49 crc kubenswrapper[4803]: I0127 23:09:49.050218 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sctx\" (UniqueName: \"kubernetes.io/projected/a557703f-e168-45a1-b090-b9cdb114e0e1-kube-api-access-5sctx\") on node \"crc\" DevicePath \"\"" Jan 27 23:09:49 crc kubenswrapper[4803]: I0127 23:09:49.730130 4803 scope.go:117] "RemoveContainer" containerID="4b0b7fd58196878e11fd12313de9b7270a858106aedabc986a7b8b1383f38473" Jan 27 23:09:49 crc kubenswrapper[4803]: I0127 23:09:49.730310 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7kj6c/crc-debug-2p2r7" Jan 27 23:09:50 crc kubenswrapper[4803]: I0127 23:09:50.319626 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a557703f-e168-45a1-b090-b9cdb114e0e1" path="/var/lib/kubelet/pods/a557703f-e168-45a1-b090-b9cdb114e0e1/volumes" Jan 27 23:09:53 crc kubenswrapper[4803]: I0127 23:09:53.784932 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 23:09:53 crc kubenswrapper[4803]: I0127 23:09:53.785464 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 23:10:13 crc kubenswrapper[4803]: I0127 23:10:13.790280 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 23:10:13 crc kubenswrapper[4803]: I0127 23:10:13.798065 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5dc8cc774c-42hcg" Jan 27 23:10:16 crc kubenswrapper[4803]: I0127 23:10:16.343195 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 23:10:16 crc kubenswrapper[4803]: I0127 23:10:16.343656 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 23:10:17 crc kubenswrapper[4803]: I0127 23:10:17.743186 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8181a2a9-82ef-4176-b0fd-b333b51abb84/aodh-listener/0.log" Jan 27 23:10:17 crc kubenswrapper[4803]: I0127 23:10:17.751554 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_8181a2a9-82ef-4176-b0fd-b333b51abb84/aodh-api/0.log" Jan 27 23:10:17 crc kubenswrapper[4803]: I0127 23:10:17.759676 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8181a2a9-82ef-4176-b0fd-b333b51abb84/aodh-evaluator/0.log" Jan 27 23:10:17 crc kubenswrapper[4803]: I0127 23:10:17.915297 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8181a2a9-82ef-4176-b0fd-b333b51abb84/aodh-notifier/0.log" Jan 27 23:10:18 crc kubenswrapper[4803]: I0127 23:10:18.051566 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f74474d96-gcmdd_d0ec158b-9237-431b-a0ac-0b6d236706b3/barbican-api/0.log" Jan 27 23:10:18 crc kubenswrapper[4803]: I0127 23:10:18.063788 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f74474d96-gcmdd_d0ec158b-9237-431b-a0ac-0b6d236706b3/barbican-api-log/0.log" Jan 27 23:10:18 crc kubenswrapper[4803]: I0127 23:10:18.157353 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7ff7599c4b-9kdgh_857f23da-b896-42a6-bb08-e30d5e58a207/barbican-keystone-listener/0.log" Jan 27 23:10:18 crc kubenswrapper[4803]: I0127 23:10:18.341516 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7ff7599c4b-9kdgh_857f23da-b896-42a6-bb08-e30d5e58a207/barbican-keystone-listener-log/0.log" Jan 27 23:10:18 crc kubenswrapper[4803]: I0127 23:10:18.343116 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d5449dd6c-29g4b_2bc83a90-d100-4aaf-b9d1-b41d1791a9f7/barbican-worker/0.log" Jan 27 23:10:18 crc kubenswrapper[4803]: I0127 23:10:18.361555 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d5449dd6c-29g4b_2bc83a90-d100-4aaf-b9d1-b41d1791a9f7/barbican-worker-log/0.log" Jan 27 23:10:18 crc kubenswrapper[4803]: I0127 23:10:18.591157 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b618bdc8-5c20-45d8-be2e-e7c1379fa992/ceilometer-central-agent/0.log" Jan 27 23:10:18 crc kubenswrapper[4803]: I0127 23:10:18.594822 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-v5q8f_e95bd3a3-5cb5-47c7-906d-addca2c174a3/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:18 crc kubenswrapper[4803]: I0127 23:10:18.759396 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b618bdc8-5c20-45d8-be2e-e7c1379fa992/ceilometer-notification-agent/0.log" Jan 27 23:10:18 crc kubenswrapper[4803]: I0127 23:10:18.799817 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b618bdc8-5c20-45d8-be2e-e7c1379fa992/proxy-httpd/0.log" Jan 27 23:10:18 crc kubenswrapper[4803]: I0127 23:10:18.806994 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b618bdc8-5c20-45d8-be2e-e7c1379fa992/sg-core/0.log" Jan 27 23:10:19 crc kubenswrapper[4803]: I0127 23:10:19.514153 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_69038d7c-7d07-4b92-a041-c27addfb7fba/cinder-api-log/0.log" Jan 27 23:10:19 crc kubenswrapper[4803]: I0127 23:10:19.536893 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_69038d7c-7d07-4b92-a041-c27addfb7fba/cinder-api/0.log" Jan 27 23:10:19 crc kubenswrapper[4803]: I0127 23:10:19.582136 4803 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3427d6c9-1902-41c1-8b41-fa9f2cc92dc7/cinder-scheduler/1.log" Jan 27 23:10:19 crc kubenswrapper[4803]: I0127 23:10:19.723151 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3427d6c9-1902-41c1-8b41-fa9f2cc92dc7/cinder-scheduler/0.log" Jan 27 23:10:19 crc kubenswrapper[4803]: I0127 23:10:19.767120 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3427d6c9-1902-41c1-8b41-fa9f2cc92dc7/probe/0.log" Jan 27 23:10:19 crc kubenswrapper[4803]: I0127 23:10:19.827462 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ww7wb_d08ec8ee-bdca-4f63-b951-abfbe94d188e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:20 crc kubenswrapper[4803]: I0127 23:10:20.092794 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-cbkcg_a626642b-e30b-4c1a-bf3d-aa1b6506002a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:20 crc kubenswrapper[4803]: I0127 23:10:20.094813 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-lzmj5_cd461b1e-89bc-4eb8-8884-bf6031e2784d/init/0.log" Jan 27 23:10:20 crc kubenswrapper[4803]: I0127 23:10:20.302813 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-lzmj5_cd461b1e-89bc-4eb8-8884-bf6031e2784d/init/0.log" Jan 27 23:10:20 crc kubenswrapper[4803]: I0127 23:10:20.400507 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-fppg9_df3f9adb-ad8a-484b-89f7-fb1689886470/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:20 crc kubenswrapper[4803]: I0127 23:10:20.420567 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-lzmj5_cd461b1e-89bc-4eb8-8884-bf6031e2784d/dnsmasq-dns/0.log" Jan 27 23:10:20 crc kubenswrapper[4803]: I0127 23:10:20.596816 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f515d455-caa5-4c15-a824-f9dd3d46d1b7/glance-httpd/0.log" Jan 27 23:10:20 crc kubenswrapper[4803]: I0127 23:10:20.610887 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f515d455-caa5-4c15-a824-f9dd3d46d1b7/glance-log/0.log" Jan 27 23:10:20 crc kubenswrapper[4803]: I0127 23:10:20.827010 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67/glance-httpd/0.log" Jan 27 23:10:20 crc kubenswrapper[4803]: I0127 23:10:20.835478 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9b8e55bc-f78b-4eae-8b59-97fd0eb9ef67/glance-log/0.log" Jan 27 23:10:21 crc kubenswrapper[4803]: I0127 23:10:21.356159 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vvx8q_0384ac7e-8b90-4801-85ee-ed8323cc2d73/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:21 crc kubenswrapper[4803]: I0127 23:10:21.469037 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5f485b9957-lsqx4_98ec2eb2-113b-451e-afe2-1e23b2cc656d/heat-engine/0.log" Jan 27 23:10:21 crc kubenswrapper[4803]: I0127 23:10:21.500964 4803 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_heat-api-6fc9ffcfc8-pvv2f_32fd7e71-3e64-4163-895f-7e73ef8a39af/heat-api/0.log" Jan 27 23:10:21 crc kubenswrapper[4803]: I0127 23:10:21.583603 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5b8fd6fc4f-nzv5v_6c535a08-5927-403e-9587-616393dd2091/heat-cfnapi/0.log" Jan 27 23:10:21 crc kubenswrapper[4803]: I0127 23:10:21.604013 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7fd9b_321c0a06-cd6e-491b-a376-526a87eb7392/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:21 crc kubenswrapper[4803]: I0127 23:10:21.807518 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492581-qjd4h_b3b21dcd-161c-4e90-adc7-292a7ff99d86/keystone-cron/0.log" Jan 27 23:10:22 crc kubenswrapper[4803]: I0127 23:10:22.034103 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bfd832f4-d1c8-4283-b3cb-55cd225022e4/kube-state-metrics/1.log" Jan 27 23:10:22 crc kubenswrapper[4803]: I0127 23:10:22.058646 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bfd832f4-d1c8-4283-b3cb-55cd225022e4/kube-state-metrics/0.log" Jan 27 23:10:22 crc kubenswrapper[4803]: I0127 23:10:22.161819 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-g567j_8f694a10-2165-4256-8f2e-8c7691864c37/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:22 crc kubenswrapper[4803]: I0127 23:10:22.339555 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-tqmbq_035bdbf8-512b-42d2-ab7f-fd357ea4fa98/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:22 crc kubenswrapper[4803]: I0127 23:10:22.527392 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-75677f8887-xwsk2_91aac3c2-75e7-4359-8d5f-96ddab2abae2/keystone-api/0.log" Jan 27 23:10:22 crc kubenswrapper[4803]: I0127 23:10:22.758084 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_2e41c64d-3e0f-4862-9d78-1d3fd0e9fbe2/mysqld-exporter/0.log" Jan 27 23:10:23 crc kubenswrapper[4803]: I0127 23:10:23.053451 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f556d549c-2bkn4_9d70e5d4-03d3-451e-9ef4-8f88d42a015c/neutron-httpd/0.log" Jan 27 23:10:23 crc kubenswrapper[4803]: I0127 23:10:23.130064 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f556d549c-2bkn4_9d70e5d4-03d3-451e-9ef4-8f88d42a015c/neutron-api/0.log" Jan 27 23:10:23 crc kubenswrapper[4803]: I0127 23:10:23.327129 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-27xqk_bc53f142-98d1-4024-b27b-923de13b8c31/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:23 crc kubenswrapper[4803]: I0127 23:10:23.797355 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_53165673-bed1-401b-aa0d-97d59c239f08/nova-api-log/0.log" Jan 27 23:10:23 crc kubenswrapper[4803]: I0127 23:10:23.804370 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_73c23100-792f-4ce4-9c03-55ffb04e5538/nova-cell0-conductor-conductor/0.log" Jan 27 23:10:24 crc kubenswrapper[4803]: I0127 23:10:24.097360 4803 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-api-0_53165673-bed1-401b-aa0d-97d59c239f08/nova-api-api/0.log" Jan 27 23:10:24 crc kubenswrapper[4803]: I0127 23:10:24.109965 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a06bce4f-9283-47de-bf15-5b1ae229961e/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 23:10:24 crc kubenswrapper[4803]: I0127 23:10:24.111369 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ba7b63c3-7320-4c4b-b099-d2f9c78abeec/nova-cell1-conductor-conductor/0.log" Jan 27 23:10:24 crc kubenswrapper[4803]: I0127 23:10:24.401710 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-blsvv_8809871a-286e-42ac-8156-13ad485cf174/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:24 crc kubenswrapper[4803]: I0127 23:10:24.482409 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3e71e913-f6e1-4eba-8c26-4ce021672adf/nova-metadata-log/0.log" Jan 27 23:10:24 crc kubenswrapper[4803]: I0127 23:10:24.766529 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0cf03068-465b-47ff-8616-7e2af8360631/nova-scheduler-scheduler/0.log" Jan 27 23:10:24 crc kubenswrapper[4803]: I0127 23:10:24.904069 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4493a984-e728-410f-9362-0795391f2793/mysql-bootstrap/0.log" Jan 27 23:10:25 crc kubenswrapper[4803]: I0127 23:10:25.072985 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4493a984-e728-410f-9362-0795391f2793/mysql-bootstrap/0.log" Jan 27 23:10:25 crc kubenswrapper[4803]: I0127 23:10:25.160203 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4493a984-e728-410f-9362-0795391f2793/galera/0.log" Jan 27 23:10:25 crc kubenswrapper[4803]: I0127 23:10:25.183993 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4493a984-e728-410f-9362-0795391f2793/galera/1.log" Jan 27 23:10:25 crc kubenswrapper[4803]: I0127 23:10:25.409743 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c78b382-5735-4741-b087-cefda68053f4/mysql-bootstrap/0.log" Jan 27 23:10:25 crc kubenswrapper[4803]: I0127 23:10:25.568884 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c78b382-5735-4741-b087-cefda68053f4/mysql-bootstrap/0.log" Jan 27 23:10:25 crc kubenswrapper[4803]: I0127 23:10:25.680279 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c78b382-5735-4741-b087-cefda68053f4/galera/0.log" Jan 27 23:10:25 crc kubenswrapper[4803]: I0127 23:10:25.703718 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c78b382-5735-4741-b087-cefda68053f4/galera/1.log" Jan 27 23:10:25 crc kubenswrapper[4803]: I0127 23:10:25.909678 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_70c0e109-5a8c-4c70-87a6-bc31ed1a001d/openstackclient/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.058992 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-x2crv_ea119635-c5fa-46da-b030-9b0cbc93cfa8/openstack-network-exporter/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.132243 4803 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_memcached-0_16121bd0-7cdd-487b-a269-a2c6cfb35d76/memcached/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.160339 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5ch2x_302d32b5-3246-4bbc-877e-700ecd30afbd/ovsdb-server-init/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.167805 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3e71e913-f6e1-4eba-8c26-4ce021672adf/nova-metadata-metadata/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.336270 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5ch2x_302d32b5-3246-4bbc-877e-700ecd30afbd/ovsdb-server-init/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.345883 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5ch2x_302d32b5-3246-4bbc-877e-700ecd30afbd/ovs-vswitchd/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.381541 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xfps2_3f1dc5cb-1275-4cf9-8c71-f9575161f73f/ovn-controller/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.382425 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5ch2x_302d32b5-3246-4bbc-877e-700ecd30afbd/ovsdb-server/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.589612 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-nqn7k_a0677e52-1a37-44b2-9627-6cb40b6d6f6d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.598997 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5a85bcae-8159-430e-bf60-b94ca19c4131/openstack-network-exporter/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.611819 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5a85bcae-8159-430e-bf60-b94ca19c4131/ovn-northd/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.742089 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cb0e5b16-8baa-435a-bae9-7b09e5602b43/openstack-network-exporter/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.770197 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cb0e5b16-8baa-435a-bae9-7b09e5602b43/ovsdbserver-nb/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.810047 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8a269bc9-9bdc-4d66-b435-2ec777b4bdcd/openstack-network-exporter/0.log" Jan 27 23:10:26 crc kubenswrapper[4803]: I0127 23:10:26.920217 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8a269bc9-9bdc-4d66-b435-2ec777b4bdcd/ovsdbserver-sb/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.068822 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5b8df6b68b-dmsbm_71b5940f-523d-4fce-b807-5db4fc97336d/placement-api/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.087712 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5b8df6b68b-dmsbm_71b5940f-523d-4fce-b807-5db4fc97336d/placement-log/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.140496 4803 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f9122f89-a56c-47d7-ad05-9aab6acdcc2f/init-config-reloader/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.274938 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f9122f89-a56c-47d7-ad05-9aab6acdcc2f/init-config-reloader/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.302022 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f9122f89-a56c-47d7-ad05-9aab6acdcc2f/thanos-sidecar/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.303898 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f9122f89-a56c-47d7-ad05-9aab6acdcc2f/config-reloader/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.315136 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f9122f89-a56c-47d7-ad05-9aab6acdcc2f/prometheus/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.459282 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_71236ece-7761-4d82-a93c-c5b40c33660b/setup-container/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.651487 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_71236ece-7761-4d82-a93c-c5b40c33660b/setup-container/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.658522 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d2af9573-4bb0-4528-a405-959329fbe7d7/setup-container/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.678533 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_71236ece-7761-4d82-a93c-c5b40c33660b/rabbitmq/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.862595 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d2af9573-4bb0-4528-a405-959329fbe7d7/rabbitmq/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.884329 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d2af9573-4bb0-4528-a405-959329fbe7d7/setup-container/0.log" Jan 27 23:10:27 crc kubenswrapper[4803]: I0127 23:10:27.925257 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_841c6d24-8f9e-401f-8045-0e76e7d93754/setup-container/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.069451 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_841c6d24-8f9e-401f-8045-0e76e7d93754/setup-container/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.109484 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_841c6d24-8f9e-401f-8045-0e76e7d93754/rabbitmq/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.116367 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_3998c673-ac46-4c45-a424-a92a7e88853c/setup-container/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.299285 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_3998c673-ac46-4c45-a424-a92a7e88853c/setup-container/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.326249 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vpg4d_b8ff2541-3983-461b-bbf6-20c732f107f0/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.338962 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_3998c673-ac46-4c45-a424-a92a7e88853c/rabbitmq/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.466787 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-j26c9_989f334d-f101-4247-9465-d4bf4c4732b8/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.491127 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-7n9pk_8ff38750-3be9-4d41-a4c7-5c2f8abd0880/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.602752 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-fbbk7_121278dd-a3d1-4108-8a1a-2995e0ec2517/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.652753 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-92xsq_2077afa2-d0de-4ed0-ad3d-289cba1c27a5/ssh-known-hosts-edpm-deployment/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.836156 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-54c764888c-dpmfw_912aaad5-2b5b-431b-821f-0ba813a0faaf/proxy-server/0.log" Jan 27 23:10:28 crc kubenswrapper[4803]: I0127 23:10:28.879707 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-54c764888c-dpmfw_912aaad5-2b5b-431b-821f-0ba813a0faaf/proxy-httpd/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.453117 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-96md4_33e4fbb3-3248-49d9-8302-cf3f0bc8ef00/swift-ring-rebalance/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.474801 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/account-auditor/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.499317 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/account-reaper/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.503587 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/account-replicator/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.646805 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/container-auditor/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.688428 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/container-server/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.691387 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/container-replicator/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.701499 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/account-server/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.728896 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/container-updater/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.849364 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/object-auditor/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.885200 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/object-expirer/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.906832 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/object-replicator/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.911256 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/object-server/0.log" Jan 27 23:10:29 crc kubenswrapper[4803]: I0127 23:10:29.931224 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/object-updater/0.log" Jan 27 23:10:30 crc kubenswrapper[4803]: I0127 23:10:30.011782 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/rsync/0.log" Jan 27 23:10:30 crc kubenswrapper[4803]: I0127 23:10:30.062679 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72f06f5c-7c0f-4969-89a2-b16210f935c4/swift-recon-cron/0.log" Jan 27 23:10:30 crc kubenswrapper[4803]: I0127 23:10:30.145939 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-plgl7_a9b994a1-9306-48ee-a202-62a8506f2f15/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:30 crc kubenswrapper[4803]: I0127 23:10:30.243926 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-2hkbc_7b71eaf1-b828-42a9-8fae-452c3d2f628e/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:30 crc kubenswrapper[4803]: I0127 23:10:30.378164 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b81b69f7-fd9a-45b7-9c1c-89365a2e6ea8/test-operator-logs-container/0.log" Jan 27 23:10:30 crc kubenswrapper[4803]: I0127 23:10:30.525071 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4k7vk_54328a1c-1655-4d76-9301-a0f71cc5c59d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 23:10:30 crc kubenswrapper[4803]: I0127 23:10:30.761420 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9af7a299-6a76-452c-854d-d80a082dabf1/tempest-tests-tempest-tests-runner/0.log" Jan 27 23:10:46 crc kubenswrapper[4803]: I0127 23:10:46.343790 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 23:10:46 crc 
kubenswrapper[4803]: I0127 23:10:46.344478 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 23:10:46 crc kubenswrapper[4803]: I0127 23:10:46.344534 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" Jan 27 23:10:46 crc kubenswrapper[4803]: I0127 23:10:46.345199 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 23:10:46 crc kubenswrapper[4803]: I0127 23:10:46.345262 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" gracePeriod=600 Jan 27 23:10:46 crc kubenswrapper[4803]: E0127 23:10:46.487306 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:10:47 crc kubenswrapper[4803]: I0127 23:10:47.360308 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" exitCode=0 Jan 27 23:10:47 crc kubenswrapper[4803]: I0127 23:10:47.360390 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d"} Jan 27 23:10:47 crc kubenswrapper[4803]: I0127 23:10:47.360588 4803 scope.go:117] "RemoveContainer" containerID="e195e4590bf4eb00374d7f4aa7585484d9570421738b754585197e9eadc6e0e7" Jan 27 23:10:47 crc kubenswrapper[4803]: I0127 23:10:47.361540 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:10:47 crc kubenswrapper[4803]: E0127 23:10:47.362059 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:10:58 crc kubenswrapper[4803]: I0127 23:10:58.023062 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z_2b8a86ce-01e7-4e15-9da4-d2a34f35acbb/util/0.log" Jan 27 23:10:58 crc kubenswrapper[4803]: I0127 23:10:58.173322 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z_2b8a86ce-01e7-4e15-9da4-d2a34f35acbb/util/0.log" Jan 27 23:10:58 crc kubenswrapper[4803]: I0127 23:10:58.180313 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z_2b8a86ce-01e7-4e15-9da4-d2a34f35acbb/pull/0.log" Jan 27 23:10:58 crc kubenswrapper[4803]: I0127 23:10:58.235292 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z_2b8a86ce-01e7-4e15-9da4-d2a34f35acbb/pull/0.log" Jan 27 23:10:58 crc kubenswrapper[4803]: I0127 23:10:58.401885 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z_2b8a86ce-01e7-4e15-9da4-d2a34f35acbb/util/0.log" Jan 27 23:10:58 crc kubenswrapper[4803]: I0127 23:10:58.441176 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z_2b8a86ce-01e7-4e15-9da4-d2a34f35acbb/pull/0.log" Jan 27 23:10:58 crc kubenswrapper[4803]: I0127 23:10:58.456524 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_237be74c3096651430a55ac4b7bc110bd813f91baa762824f8a720927a4tz9z_2b8a86ce-01e7-4e15-9da4-d2a34f35acbb/extract/0.log" Jan 27 23:10:58 crc kubenswrapper[4803]: I0127 23:10:58.655182 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-5qnbd_eac7ef2c-904d-429b-ac3f-a43a72339fde/manager/0.log" Jan 27 23:10:58 crc kubenswrapper[4803]: I0127 23:10:58.678717 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-t9ng6_47dce22a-001c-4774-ab99-28cd85420e1c/manager/0.log" Jan 27 23:10:58 crc kubenswrapper[4803]: I0127 23:10:58.807558 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-hxpmk_51221b4b-024e-4134-8baa-a9478c8c596a/manager/0.log" Jan 27 23:10:58 crc kubenswrapper[4803]: I0127 23:10:58.924898 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-pcnl7_c6f78887-1cda-463f-ab3f-57703bfb7a41/manager/0.log" Jan 27 23:10:59 crc kubenswrapper[4803]: I0127 23:10:59.004683 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-2sffc_f8498dfc-1b67-4783-9389-10d5b30b2860/manager/1.log" Jan 27 23:10:59 crc kubenswrapper[4803]: I0127 23:10:59.146344 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-2sffc_f8498dfc-1b67-4783-9389-10d5b30b2860/manager/0.log" Jan 27 23:10:59 crc kubenswrapper[4803]: I0127 23:10:59.155296 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-7sjdg_9c6792d4-9d18-4d1c-b855-65aba5ae4919/manager/0.log" Jan 27 23:10:59 crc kubenswrapper[4803]: I0127 23:10:59.307518 4803 
scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:10:59 crc kubenswrapper[4803]: E0127 23:10:59.308136 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:10:59 crc kubenswrapper[4803]: I0127 23:10:59.349347 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-w8nw7_29a3b26e-0f66-4f80-9f5f-4cf3d6c4e4a8/manager/0.log" Jan 27 23:10:59 crc kubenswrapper[4803]: I0127 23:10:59.683993 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-r5dqr_1f1cd413-71e0-443e-95cf-e5d46a745b1b/manager/0.log" Jan 27 23:10:59 crc kubenswrapper[4803]: I0127 23:10:59.686584 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-nxlck_e9d93e19-7c2b-4d53-bfe8-7b0157dec931/manager/0.log" Jan 27 23:10:59 crc kubenswrapper[4803]: I0127 23:10:59.694484 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-h9xdv_662a79ef-9928-408c-8cfb-62945e0b6725/manager/0.log" Jan 27 23:10:59 crc kubenswrapper[4803]: I0127 23:10:59.887989 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-26gcs_35783fb5-ef1c-4b33-beb1-af9fee8512d3/manager/0.log" Jan 27 23:10:59 crc kubenswrapper[4803]: I0127 23:10:59.927345 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-t9zrn_c46ecfda-be7b-4f42-9874-a8a94f71188f/manager/0.log" Jan 27 23:11:00 crc kubenswrapper[4803]: I0127 23:11:00.118863 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-qg2hw_7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79/manager/1.log" Jan 27 23:11:00 crc kubenswrapper[4803]: I0127 23:11:00.163484 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-gst8v_b6c89c2e-a080-4d20-bc81-bda0f9eb17b6/manager/0.log" Jan 27 23:11:00 crc kubenswrapper[4803]: I0127 23:11:00.260722 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-qg2hw_7e4f1d8f-cbc3-4a33-9aa7-9fb0375fcd79/manager/0.log" Jan 27 23:11:00 crc kubenswrapper[4803]: I0127 23:11:00.333031 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt_5bedb1c3-9c5a-4137-851d-33b1723a3221/manager/1.log" Jan 27 23:11:00 crc kubenswrapper[4803]: I0127 23:11:00.368547 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8544wbxt_5bedb1c3-9c5a-4137-851d-33b1723a3221/manager/0.log" Jan 27 23:11:00 crc kubenswrapper[4803]: I0127 23:11:00.821020 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-75cd85946-nk8z5_e163066d-c764-49e0-9119-cbeb4f4fe50b/operator/0.log" Jan 27 23:11:00 crc kubenswrapper[4803]: I0127 23:11:00.825405 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tp8d4_b438c007-ef5f-4ed3-8f81-c5ac6d0209ac/registry-server/1.log" Jan 27 23:11:00 crc kubenswrapper[4803]: I0127 23:11:00.924923 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tp8d4_b438c007-ef5f-4ed3-8f81-c5ac6d0209ac/registry-server/0.log" Jan 27 23:11:01 crc kubenswrapper[4803]: I0127 23:11:01.187259 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-hcwxh_0592ab2d-4ade-4747-a823-73cd5dcac047/manager/0.log" Jan 27 23:11:01 crc kubenswrapper[4803]: I0127 23:11:01.919838 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5g5g7_293c9c98-184e-45cb-b0be-593f544e49df/operator/0.log" Jan 27 23:11:01 crc kubenswrapper[4803]: I0127 23:11:01.961620 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-prltl_35742b16-a222-4602-ae0a-d078eafb1ea1/manager/0.log" Jan 27 23:11:02 crc kubenswrapper[4803]: I0127 23:11:02.148896 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4rzpc_eae71f44-8628-4436-be64-9ac3aa8f9255/manager/0.log" Jan 27 23:11:02 crc kubenswrapper[4803]: I0127 23:11:02.157139 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-64f565f6ff-2xjcl_62a498d3-45eb-4117-ba22-041e8d90762d/manager/0.log" Jan 27 23:11:02 crc kubenswrapper[4803]: I0127 23:11:02.347826 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-9hlvn_7b65a167-f9c8-475c-be5b-39e0502352ab/manager/0.log" Jan 27 23:11:02 crc kubenswrapper[4803]: I0127 23:11:02.466546 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-tz8ql_57c28f35-52f1-48aa-ad74-3f66a5cdd52c/manager/1.log" Jan 27 23:11:02 crc kubenswrapper[4803]: I0127 23:11:02.494862 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7948f6cfb4-mpkbs_9dde9803-1302-4f0f-a353-1313e3696d7b/manager/0.log" Jan 27 23:11:02 crc kubenswrapper[4803]: I0127 23:11:02.532875 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-tz8ql_57c28f35-52f1-48aa-ad74-3f66a5cdd52c/manager/0.log" Jan 27 23:11:12 crc kubenswrapper[4803]: I0127 23:11:12.307619 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:11:12 crc kubenswrapper[4803]: E0127 23:11:12.308361 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" 
Jan 27 23:11:21 crc kubenswrapper[4803]: I0127 23:11:21.793816 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wpzf9_8f440e60-e9e3-43ef-93ca-9b27adeac069/control-plane-machine-set-operator/0.log" Jan 27 23:11:21 crc kubenswrapper[4803]: I0127 23:11:21.983009 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8lpmj_e2308949-6865-4d3b-ad3b-1de5c42149b8/kube-rbac-proxy/0.log" Jan 27 23:11:21 crc kubenswrapper[4803]: I0127 23:11:21.990890 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8lpmj_e2308949-6865-4d3b-ad3b-1de5c42149b8/machine-api-operator/0.log" Jan 27 23:11:23 crc kubenswrapper[4803]: I0127 23:11:23.307740 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:11:23 crc kubenswrapper[4803]: E0127 23:11:23.308301 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:11:35 crc kubenswrapper[4803]: I0127 23:11:35.192297 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-4st4q_3354336a-cebc-4270-8c96-379cfa5682b8/cert-manager-controller/0.log" Jan 27 23:11:35 crc kubenswrapper[4803]: I0127 23:11:35.386817 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-mvwfx_601f83b4-7a3d-49fe-9674-58798267d78c/cert-manager-cainjector/0.log" Jan 27 23:11:35 crc kubenswrapper[4803]: I0127 23:11:35.419032 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-99277_021b5278-1b81-43b3-ae44-ec231fb77687/cert-manager-webhook/0.log" Jan 27 23:11:38 crc kubenswrapper[4803]: I0127 23:11:38.315805 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:11:38 crc kubenswrapper[4803]: E0127 23:11:38.318713 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:11:49 crc kubenswrapper[4803]: I0127 23:11:49.307362 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:11:49 crc kubenswrapper[4803]: E0127 23:11:49.308396 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:11:52 crc 
kubenswrapper[4803]: I0127 23:11:52.025831 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7kfnq_e9e0ba93-d76c-4c79-ac8e-cb250366ce7a/nmstate-console-plugin/0.log" Jan 27 23:11:52 crc kubenswrapper[4803]: I0127 23:11:52.207452 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wrzxs_89a353b4-798b-4f55-91ff-316a9840a7bb/nmstate-handler/0.log" Jan 27 23:11:52 crc kubenswrapper[4803]: I0127 23:11:52.262668 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mt2x7_bd2efa75-5c9b-4f23-a284-9f69ae3587af/nmstate-metrics/0.log" Jan 27 23:11:52 crc kubenswrapper[4803]: I0127 23:11:52.274927 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mt2x7_bd2efa75-5c9b-4f23-a284-9f69ae3587af/kube-rbac-proxy/0.log" Jan 27 23:11:53 crc kubenswrapper[4803]: I0127 23:11:53.116707 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-bqlpm_77dd058d-f38b-4382-923d-f68fbb3c9566/nmstate-webhook/0.log" Jan 27 23:11:53 crc kubenswrapper[4803]: I0127 23:11:53.120929 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-bdqpw_7626f07b-4412-434f-87b9-406475aa7a90/nmstate-operator/0.log" Jan 27 23:12:02 crc kubenswrapper[4803]: I0127 23:12:02.308007 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:12:02 crc kubenswrapper[4803]: E0127 23:12:02.308934 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:12:06 crc kubenswrapper[4803]: I0127 23:12:06.749390 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b65d5f66c-f2bd5_51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d/kube-rbac-proxy/0.log" Jan 27 23:12:06 crc kubenswrapper[4803]: I0127 23:12:06.809108 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b65d5f66c-f2bd5_51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d/manager/0.log" Jan 27 23:12:13 crc kubenswrapper[4803]: I0127 23:12:13.311900 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:12:13 crc kubenswrapper[4803]: E0127 23:12:13.313392 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:12:21 crc kubenswrapper[4803]: I0127 23:12:21.079178 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qtnmg_67bbe061-3ab2-43cf-9579-900c0ff65da9/prometheus-operator/0.log" Jan 27 23:12:21 crc 
kubenswrapper[4803]: I0127 23:12:21.215365 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_48ffb065-6bf7-4b9c-981e-f834ead82767/prometheus-operator-admission-webhook/0.log" Jan 27 23:12:21 crc kubenswrapper[4803]: I0127 23:12:21.240508 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_eed68546-4e6f-4551-95ab-7e870b098179/prometheus-operator-admission-webhook/0.log" Jan 27 23:12:21 crc kubenswrapper[4803]: I0127 23:12:21.404910 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-skn2q_69126409-4642-4d42-855d-e7325b3de7c5/operator/1.log" Jan 27 23:12:21 crc kubenswrapper[4803]: I0127 23:12:21.484004 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-skn2q_69126409-4642-4d42-855d-e7325b3de7c5/operator/0.log" Jan 27 23:12:21 crc kubenswrapper[4803]: I0127 23:12:21.492029 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-zj24g_7dbfecf3-a077-4d96-b7d5-d81b1c744194/observability-ui-dashboards/0.log" Jan 27 23:12:21 crc kubenswrapper[4803]: I0127 23:12:21.665484 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-nfxjq_5b3c1908-cc42-4af3-a73d-916466d38dd6/perses-operator/0.log" Jan 27 23:12:27 crc kubenswrapper[4803]: I0127 23:12:27.307300 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:12:27 crc kubenswrapper[4803]: E0127 23:12:27.308185 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:12:37 crc kubenswrapper[4803]: I0127 23:12:37.904961 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-n4gqd_98bff570-ae6c-423d-8a0b-0d2aed9e0853/cluster-logging-operator/0.log" Jan 27 23:12:38 crc kubenswrapper[4803]: I0127 23:12:38.124072 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-dg4sw_0f80595e-9f2c-44d6-af65-29acb22c23d0/collector/0.log" Jan 27 23:12:38 crc kubenswrapper[4803]: I0127 23:12:38.162967 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_a4c26ad1-a645-4746-9c19-c7bbda04000c/loki-compactor/0.log" Jan 27 23:12:38 crc kubenswrapper[4803]: I0127 23:12:38.314666 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:12:38 crc kubenswrapper[4803]: E0127 23:12:38.314918 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" 
podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:12:38 crc kubenswrapper[4803]: I0127 23:12:38.345511 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-8597d8df56-dkqb6_806f03eb-fc44-4b50-953e-d4101abd8bc3/gateway/0.log" Jan 27 23:12:38 crc kubenswrapper[4803]: I0127 23:12:38.352661 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-zr5dw_dea15eec-6442-4acb-b40a-418dddb46623/loki-distributor/0.log" Jan 27 23:12:38 crc kubenswrapper[4803]: I0127 23:12:38.402762 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-8597d8df56-dkqb6_806f03eb-fc44-4b50-953e-d4101abd8bc3/opa/0.log" Jan 27 23:12:38 crc kubenswrapper[4803]: I0127 23:12:38.556617 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-8597d8df56-shvtm_bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b/opa/0.log" Jan 27 23:12:38 crc kubenswrapper[4803]: I0127 23:12:38.571163 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-8597d8df56-shvtm_bc7542cd-ef2e-454e-b2b7-f417dcb1ba9b/gateway/0.log" Jan 27 23:12:38 crc kubenswrapper[4803]: I0127 23:12:38.810063 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_6efa3b11-b2ea-4f6d-87d2-177229718026/loki-index-gateway/0.log" Jan 27 23:12:38 crc kubenswrapper[4803]: I0127 23:12:38.872636 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_564d57a3-4f2a-46a9-928b-b77dc685d903/loki-ingester/0.log" Jan 27 23:12:38 crc kubenswrapper[4803]: I0127 23:12:38.988356 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-q4xmw_1e455314-8336-4d0e-a611-044952db08e7/loki-querier/0.log" Jan 27 23:12:39 crc kubenswrapper[4803]: I0127 23:12:39.064387 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-bs4dm_0323234b-6aa2-41ea-bf58-a4b3924d6e4a/loki-query-frontend/0.log" Jan 27 23:12:50 crc kubenswrapper[4803]: I0127 23:12:50.306891 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:12:50 crc kubenswrapper[4803]: E0127 23:12:50.307620 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:12:55 crc kubenswrapper[4803]: I0127 23:12:55.885313 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-2nc8h_802fd9e5-a4c1-4195-b95a-e8fde55cbe1c/kube-rbac-proxy/0.log" Jan 27 23:12:56 crc kubenswrapper[4803]: I0127 23:12:56.074319 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-2nc8h_802fd9e5-a4c1-4195-b95a-e8fde55cbe1c/controller/0.log" Jan 27 23:12:56 crc kubenswrapper[4803]: I0127 23:12:56.121779 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-frr-files/0.log" Jan 27 23:12:57 crc 
kubenswrapper[4803]: I0127 23:12:57.204453 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-metrics/0.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.238800 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-frr-files/0.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.273701 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-reloader/0.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.283696 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-reloader/0.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.468078 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-frr-files/0.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.528363 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-reloader/0.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.540203 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-metrics/0.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.558371 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-metrics/0.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.718433 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-frr-files/0.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.760350 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-reloader/0.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.800210 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/controller/1.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.814365 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/cp-metrics/0.log" Jan 27 23:12:57 crc kubenswrapper[4803]: I0127 23:12:57.962419 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/controller/0.log" Jan 27 23:12:58 crc kubenswrapper[4803]: I0127 23:12:58.006320 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/frr-metrics/0.log" Jan 27 23:12:58 crc kubenswrapper[4803]: I0127 23:12:58.013185 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/frr/1.log" Jan 27 23:12:58 crc kubenswrapper[4803]: I0127 23:12:58.445143 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/kube-rbac-proxy-frr/0.log" Jan 27 23:12:58 crc kubenswrapper[4803]: I0127 23:12:58.479831 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/kube-rbac-proxy/0.log" Jan 27 23:12:58 crc kubenswrapper[4803]: I0127 23:12:58.483943 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/reloader/0.log" Jan 27 23:12:59 crc kubenswrapper[4803]: I0127 23:12:59.189084 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-tl69d_ceff729d-b83b-45b4-99ef-d11ef9570efb/frr-k8s-webhook-server/1.log" Jan 27 23:12:59 crc kubenswrapper[4803]: I0127 23:12:59.203619 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-tl69d_ceff729d-b83b-45b4-99ef-d11ef9570efb/frr-k8s-webhook-server/0.log" Jan 27 23:12:59 crc kubenswrapper[4803]: I0127 23:12:59.403212 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-848cc4d96f-sx8xb_2beb4659-d63e-495f-a32f-f94cbcbbc1ce/manager/1.log" Jan 27 23:12:59 crc kubenswrapper[4803]: I0127 23:12:59.453728 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jsxr8_0f079c02-e2f3-4dc3-aad2-86c70d3d41e8/frr/0.log" Jan 27 23:12:59 crc kubenswrapper[4803]: I0127 23:12:59.465069 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-848cc4d96f-sx8xb_2beb4659-d63e-495f-a32f-f94cbcbbc1ce/manager/0.log" Jan 27 23:12:59 crc kubenswrapper[4803]: I0127 23:12:59.636643 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86894678c6-4f29p_038e0b5a-3e3b-462b-83ca-c9865b6f4240/webhook-server/1.log" Jan 27 23:12:59 crc kubenswrapper[4803]: I0127 23:12:59.648887 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86894678c6-4f29p_038e0b5a-3e3b-462b-83ca-c9865b6f4240/webhook-server/0.log" Jan 27 23:12:59 crc kubenswrapper[4803]: I0127 23:12:59.813413 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-p9fmz_669fa453-18c2-4202-9ac3-117b6f000063/kube-rbac-proxy/0.log" Jan 27 23:13:00 crc kubenswrapper[4803]: I0127 23:13:00.204145 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-p9fmz_669fa453-18c2-4202-9ac3-117b6f000063/speaker/0.log" Jan 27 23:13:04 crc kubenswrapper[4803]: I0127 23:13:04.307519 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:13:04 crc kubenswrapper[4803]: E0127 23:13:04.310133 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:13:12 crc kubenswrapper[4803]: I0127 23:13:12.912543 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv_ef42d6f6-0acd-4bb0-aec2-a67189015527/util/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.080488 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv_ef42d6f6-0acd-4bb0-aec2-a67189015527/pull/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.123893 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv_ef42d6f6-0acd-4bb0-aec2-a67189015527/pull/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.134137 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv_ef42d6f6-0acd-4bb0-aec2-a67189015527/util/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.341590 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv_ef42d6f6-0acd-4bb0-aec2-a67189015527/pull/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.353160 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv_ef42d6f6-0acd-4bb0-aec2-a67189015527/util/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.379883 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2ht9qv_ef42d6f6-0acd-4bb0-aec2-a67189015527/extract/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.513287 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8_7026d76e-2c5e-4740-98c4-76c8f672f6c9/util/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.686365 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8_7026d76e-2c5e-4740-98c4-76c8f672f6c9/pull/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.737660 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8_7026d76e-2c5e-4740-98c4-76c8f672f6c9/util/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.743947 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8_7026d76e-2c5e-4740-98c4-76c8f672f6c9/pull/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.913708 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8_7026d76e-2c5e-4740-98c4-76c8f672f6c9/extract/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.931445 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8_7026d76e-2c5e-4740-98c4-76c8f672f6c9/pull/0.log" Jan 27 23:13:13 crc kubenswrapper[4803]: I0127 23:13:13.941541 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7wtp8_7026d76e-2c5e-4740-98c4-76c8f672f6c9/util/0.log" Jan 27 23:13:14 crc kubenswrapper[4803]: I0127 23:13:14.094773 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5_3cb26d95-4b42-4f55-921c-390f8bb5853c/util/0.log" Jan 27 23:13:14 crc kubenswrapper[4803]: I0127 23:13:14.264797 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5_3cb26d95-4b42-4f55-921c-390f8bb5853c/util/0.log" Jan 27 23:13:14 crc kubenswrapper[4803]: I0127 23:13:14.276994 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5_3cb26d95-4b42-4f55-921c-390f8bb5853c/pull/0.log" Jan 27 23:13:14 crc kubenswrapper[4803]: I0127 23:13:14.299995 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5_3cb26d95-4b42-4f55-921c-390f8bb5853c/pull/0.log" Jan 27 23:13:14 crc kubenswrapper[4803]: I0127 23:13:14.471492 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5_3cb26d95-4b42-4f55-921c-390f8bb5853c/util/0.log" Jan 27 23:13:14 crc kubenswrapper[4803]: I0127 23:13:14.500095 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5_3cb26d95-4b42-4f55-921c-390f8bb5853c/pull/0.log" Jan 27 23:13:14 crc kubenswrapper[4803]: I0127 23:13:14.504109 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bbknr5_3cb26d95-4b42-4f55-921c-390f8bb5853c/extract/0.log" Jan 27 23:13:14 crc kubenswrapper[4803]: I0127 23:13:14.638108 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl_43e3512b-91c4-4472-851f-20dffb5b2b19/util/0.log" Jan 27 23:13:14 crc kubenswrapper[4803]: I0127 23:13:14.823091 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl_43e3512b-91c4-4472-851f-20dffb5b2b19/pull/0.log" Jan 27 23:13:14 crc kubenswrapper[4803]: I0127 23:13:14.839819 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl_43e3512b-91c4-4472-851f-20dffb5b2b19/util/0.log" Jan 27 23:13:14 crc kubenswrapper[4803]: I0127 23:13:14.874744 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl_43e3512b-91c4-4472-851f-20dffb5b2b19/pull/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.013152 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl_43e3512b-91c4-4472-851f-20dffb5b2b19/pull/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.015224 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl_43e3512b-91c4-4472-851f-20dffb5b2b19/util/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.030902 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g9dkl_43e3512b-91c4-4472-851f-20dffb5b2b19/extract/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.189823 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck_3eb17edf-3450-4f70-b33e-864605aa1e6c/util/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.371681 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck_3eb17edf-3450-4f70-b33e-864605aa1e6c/util/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.376561 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck_3eb17edf-3450-4f70-b33e-864605aa1e6c/pull/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.381244 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck_3eb17edf-3450-4f70-b33e-864605aa1e6c/pull/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.542615 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck_3eb17edf-3450-4f70-b33e-864605aa1e6c/pull/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.543558 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck_3eb17edf-3450-4f70-b33e-864605aa1e6c/util/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.547959 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089wkck_3eb17edf-3450-4f70-b33e-864605aa1e6c/extract/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.719180 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9crs2_a5265b8b-6b21-4c52-be79-e6c2a2f94a1d/extract-utilities/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.894518 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9crs2_a5265b8b-6b21-4c52-be79-e6c2a2f94a1d/extract-utilities/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.930754 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9crs2_a5265b8b-6b21-4c52-be79-e6c2a2f94a1d/extract-content/0.log" Jan 27 23:13:15 crc kubenswrapper[4803]: I0127 23:13:15.931476 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9crs2_a5265b8b-6b21-4c52-be79-e6c2a2f94a1d/extract-content/0.log" Jan 27 23:13:16 crc kubenswrapper[4803]: I0127 23:13:16.143879 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9crs2_a5265b8b-6b21-4c52-be79-e6c2a2f94a1d/extract-content/0.log" Jan 27 23:13:16 crc kubenswrapper[4803]: I0127 23:13:16.194681 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9crs2_a5265b8b-6b21-4c52-be79-e6c2a2f94a1d/extract-utilities/0.log" Jan 27 23:13:16 crc kubenswrapper[4803]: I0127 23:13:16.231240 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-9crs2_a5265b8b-6b21-4c52-be79-e6c2a2f94a1d/registry-server/1.log" Jan 27 23:13:16 crc kubenswrapper[4803]: I0127 23:13:16.505090 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9nds5_f28d4382-79f1-4254-a4fa-fced45178594/extract-utilities/0.log" Jan 27 23:13:16 crc kubenswrapper[4803]: I0127 23:13:16.745564 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9nds5_f28d4382-79f1-4254-a4fa-fced45178594/extract-content/0.log" Jan 27 23:13:16 crc kubenswrapper[4803]: I0127 23:13:16.758289 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9nds5_f28d4382-79f1-4254-a4fa-fced45178594/extract-utilities/0.log" Jan 27 23:13:16 crc kubenswrapper[4803]: I0127 23:13:16.785194 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9nds5_f28d4382-79f1-4254-a4fa-fced45178594/extract-content/0.log" Jan 27 23:13:16 crc kubenswrapper[4803]: I0127 23:13:16.953369 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9crs2_a5265b8b-6b21-4c52-be79-e6c2a2f94a1d/registry-server/0.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.004678 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9nds5_f28d4382-79f1-4254-a4fa-fced45178594/extract-utilities/0.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.050284 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9nds5_f28d4382-79f1-4254-a4fa-fced45178594/extract-content/0.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.193694 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9nds5_f28d4382-79f1-4254-a4fa-fced45178594/registry-server/1.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.208427 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vlj5d_2b1c25f0-10e5-41a3-81ca-aef5372a4d38/marketplace-operator/1.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.341119 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vlj5d_2b1c25f0-10e5-41a3-81ca-aef5372a4d38/marketplace-operator/0.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.467336 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hg2h2_d6e32da0-91ce-49f6-8f4e-928b9fee6fdf/extract-utilities/0.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.621966 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hg2h2_d6e32da0-91ce-49f6-8f4e-928b9fee6fdf/extract-content/0.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.686884 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hg2h2_d6e32da0-91ce-49f6-8f4e-928b9fee6fdf/extract-utilities/0.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.760280 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hg2h2_d6e32da0-91ce-49f6-8f4e-928b9fee6fdf/extract-content/0.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.807713 4803 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_community-operators-9nds5_f28d4382-79f1-4254-a4fa-fced45178594/registry-server/0.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.889921 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hg2h2_d6e32da0-91ce-49f6-8f4e-928b9fee6fdf/extract-utilities/0.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.918494 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hg2h2_d6e32da0-91ce-49f6-8f4e-928b9fee6fdf/extract-content/0.log" Jan 27 23:13:17 crc kubenswrapper[4803]: I0127 23:13:17.942157 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hg2h2_d6e32da0-91ce-49f6-8f4e-928b9fee6fdf/registry-server/1.log" Jan 27 23:13:18 crc kubenswrapper[4803]: I0127 23:13:18.006079 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hg2h2_d6e32da0-91ce-49f6-8f4e-928b9fee6fdf/registry-server/0.log" Jan 27 23:13:18 crc kubenswrapper[4803]: I0127 23:13:18.070077 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cwt95_1088c904-bd11-410d-963b-91425f9e2ee1/extract-utilities/0.log" Jan 27 23:13:18 crc kubenswrapper[4803]: I0127 23:13:18.257772 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cwt95_1088c904-bd11-410d-963b-91425f9e2ee1/extract-content/0.log" Jan 27 23:13:18 crc kubenswrapper[4803]: I0127 23:13:18.270230 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cwt95_1088c904-bd11-410d-963b-91425f9e2ee1/extract-content/0.log" Jan 27 23:13:18 crc kubenswrapper[4803]: I0127 23:13:18.277746 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cwt95_1088c904-bd11-410d-963b-91425f9e2ee1/extract-utilities/0.log" Jan 27 23:13:18 crc kubenswrapper[4803]: I0127 23:13:18.433202 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cwt95_1088c904-bd11-410d-963b-91425f9e2ee1/extract-utilities/0.log" Jan 27 23:13:18 crc kubenswrapper[4803]: I0127 23:13:18.433265 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cwt95_1088c904-bd11-410d-963b-91425f9e2ee1/extract-content/0.log" Jan 27 23:13:18 crc kubenswrapper[4803]: I0127 23:13:18.725450 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cwt95_1088c904-bd11-410d-963b-91425f9e2ee1/registry-server/0.log" Jan 27 23:13:19 crc kubenswrapper[4803]: I0127 23:13:19.307160 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:13:19 crc kubenswrapper[4803]: E0127 23:13:19.308111 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:13:31 crc kubenswrapper[4803]: I0127 23:13:31.307499 4803 scope.go:117] "RemoveContainer" 
containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:13:31 crc kubenswrapper[4803]: E0127 23:13:31.309133 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:13:32 crc kubenswrapper[4803]: I0127 23:13:32.850433 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ff8978574-v75wv_eed68546-4e6f-4551-95ab-7e870b098179/prometheus-operator-admission-webhook/0.log" Jan 27 23:13:32 crc kubenswrapper[4803]: I0127 23:13:32.887886 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ff8978574-mvvx7_48ffb065-6bf7-4b9c-981e-f834ead82767/prometheus-operator-admission-webhook/0.log" Jan 27 23:13:32 crc kubenswrapper[4803]: I0127 23:13:32.904795 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qtnmg_67bbe061-3ab2-43cf-9579-900c0ff65da9/prometheus-operator/0.log" Jan 27 23:13:33 crc kubenswrapper[4803]: I0127 23:13:33.036602 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-skn2q_69126409-4642-4d42-855d-e7325b3de7c5/operator/1.log" Jan 27 23:13:33 crc kubenswrapper[4803]: I0127 23:13:33.087959 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-zj24g_7dbfecf3-a077-4d96-b7d5-d81b1c744194/observability-ui-dashboards/0.log" Jan 27 23:13:33 crc kubenswrapper[4803]: I0127 23:13:33.090855 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-skn2q_69126409-4642-4d42-855d-e7325b3de7c5/operator/0.log" Jan 27 23:13:33 crc kubenswrapper[4803]: I0127 23:13:33.108369 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-nfxjq_5b3c1908-cc42-4af3-a73d-916466d38dd6/perses-operator/0.log" Jan 27 23:13:42 crc kubenswrapper[4803]: I0127 23:13:42.307421 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:13:42 crc kubenswrapper[4803]: E0127 23:13:42.308191 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:13:46 crc kubenswrapper[4803]: I0127 23:13:46.370274 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b65d5f66c-f2bd5_51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d/kube-rbac-proxy/0.log" Jan 27 23:13:46 crc kubenswrapper[4803]: I0127 23:13:46.405151 4803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b65d5f66c-f2bd5_51ba4ac9-8ab7-4c28-83fe-6a3fbe40025d/manager/0.log" Jan 27 23:13:54 crc kubenswrapper[4803]: I0127 23:13:54.308671 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:13:54 crc kubenswrapper[4803]: E0127 23:13:54.309535 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:14:06 crc kubenswrapper[4803]: I0127 23:14:06.307413 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:14:06 crc kubenswrapper[4803]: E0127 23:14:06.308171 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:14:18 crc kubenswrapper[4803]: I0127 23:14:18.318371 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:14:18 crc kubenswrapper[4803]: E0127 23:14:18.320997 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:14:29 crc kubenswrapper[4803]: I0127 23:14:29.306730 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:14:29 crc kubenswrapper[4803]: E0127 23:14:29.307537 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:14:44 crc kubenswrapper[4803]: I0127 23:14:44.306999 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:14:44 crc kubenswrapper[4803]: E0127 23:14:44.307920 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:14:59 crc kubenswrapper[4803]: I0127 
23:14:59.309026 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:14:59 crc kubenswrapper[4803]: E0127 23:14:59.310615 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.275389 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7"] Jan 27 23:15:00 crc kubenswrapper[4803]: E0127 23:15:00.275955 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a557703f-e168-45a1-b090-b9cdb114e0e1" containerName="container-00" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.275969 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a557703f-e168-45a1-b090-b9cdb114e0e1" containerName="container-00" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.276245 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a557703f-e168-45a1-b090-b9cdb114e0e1" containerName="container-00" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.278756 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.286290 4803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.289241 4803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.346672 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7"] Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.407401 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4423455-a376-41bf-a48c-026ae318916f-config-volume\") pod \"collect-profiles-29492595-dx8t7\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.408318 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4423455-a376-41bf-a48c-026ae318916f-secret-volume\") pod \"collect-profiles-29492595-dx8t7\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.408400 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg259\" (UniqueName: \"kubernetes.io/projected/a4423455-a376-41bf-a48c-026ae318916f-kube-api-access-tg259\") pod \"collect-profiles-29492595-dx8t7\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 
23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.510510 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4423455-a376-41bf-a48c-026ae318916f-config-volume\") pod \"collect-profiles-29492595-dx8t7\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.510834 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4423455-a376-41bf-a48c-026ae318916f-secret-volume\") pod \"collect-profiles-29492595-dx8t7\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.510894 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg259\" (UniqueName: \"kubernetes.io/projected/a4423455-a376-41bf-a48c-026ae318916f-kube-api-access-tg259\") pod \"collect-profiles-29492595-dx8t7\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.511690 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4423455-a376-41bf-a48c-026ae318916f-config-volume\") pod \"collect-profiles-29492595-dx8t7\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.533928 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg259\" (UniqueName: \"kubernetes.io/projected/a4423455-a376-41bf-a48c-026ae318916f-kube-api-access-tg259\") pod \"collect-profiles-29492595-dx8t7\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.538682 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4423455-a376-41bf-a48c-026ae318916f-secret-volume\") pod \"collect-profiles-29492595-dx8t7\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:00 crc kubenswrapper[4803]: I0127 23:15:00.601068 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:01 crc kubenswrapper[4803]: I0127 23:15:01.601645 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7"] Jan 27 23:15:02 crc kubenswrapper[4803]: I0127 23:15:02.235730 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" event={"ID":"a4423455-a376-41bf-a48c-026ae318916f","Type":"ContainerStarted","Data":"0ca5012cf52d548d9a11ed673328b18d99904fd09d44e62a8983933584383ee0"} Jan 27 23:15:02 crc kubenswrapper[4803]: I0127 23:15:02.236179 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" event={"ID":"a4423455-a376-41bf-a48c-026ae318916f","Type":"ContainerStarted","Data":"b132ffe5a6e567d9ce25f1393d7d95c0c414c07bc63ea16fe7505fcef231c4bc"} Jan 27 23:15:02 crc kubenswrapper[4803]: I0127 23:15:02.255641 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" podStartSLOduration=2.255621691 podStartE2EDuration="2.255621691s" podCreationTimestamp="2026-01-27 23:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 23:15:02.252320562 +0000 UTC m=+5254.668342281" watchObservedRunningTime="2026-01-27 23:15:02.255621691 +0000 UTC m=+5254.671643390" Jan 27 23:15:03 crc kubenswrapper[4803]: I0127 23:15:03.251118 4803 generic.go:334] "Generic (PLEG): container finished" podID="a4423455-a376-41bf-a48c-026ae318916f" containerID="0ca5012cf52d548d9a11ed673328b18d99904fd09d44e62a8983933584383ee0" exitCode=0 Jan 27 23:15:03 crc kubenswrapper[4803]: I0127 23:15:03.251519 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" event={"ID":"a4423455-a376-41bf-a48c-026ae318916f","Type":"ContainerDied","Data":"0ca5012cf52d548d9a11ed673328b18d99904fd09d44e62a8983933584383ee0"} Jan 27 23:15:04 crc kubenswrapper[4803]: I0127 23:15:04.707586 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:04 crc kubenswrapper[4803]: I0127 23:15:04.751700 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4423455-a376-41bf-a48c-026ae318916f-secret-volume\") pod \"a4423455-a376-41bf-a48c-026ae318916f\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " Jan 27 23:15:04 crc kubenswrapper[4803]: I0127 23:15:04.752112 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4423455-a376-41bf-a48c-026ae318916f-config-volume\") pod \"a4423455-a376-41bf-a48c-026ae318916f\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " Jan 27 23:15:04 crc kubenswrapper[4803]: I0127 23:15:04.752606 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg259\" (UniqueName: \"kubernetes.io/projected/a4423455-a376-41bf-a48c-026ae318916f-kube-api-access-tg259\") pod \"a4423455-a376-41bf-a48c-026ae318916f\" (UID: \"a4423455-a376-41bf-a48c-026ae318916f\") " Jan 27 23:15:04 crc kubenswrapper[4803]: I0127 23:15:04.754641 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4423455-a376-41bf-a48c-026ae318916f-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4423455-a376-41bf-a48c-026ae318916f" (UID: "a4423455-a376-41bf-a48c-026ae318916f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 23:15:04 crc kubenswrapper[4803]: I0127 23:15:04.760870 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4423455-a376-41bf-a48c-026ae318916f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4423455-a376-41bf-a48c-026ae318916f" (UID: "a4423455-a376-41bf-a48c-026ae318916f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 23:15:04 crc kubenswrapper[4803]: I0127 23:15:04.764333 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4423455-a376-41bf-a48c-026ae318916f-kube-api-access-tg259" (OuterVolumeSpecName: "kube-api-access-tg259") pod "a4423455-a376-41bf-a48c-026ae318916f" (UID: "a4423455-a376-41bf-a48c-026ae318916f"). InnerVolumeSpecName "kube-api-access-tg259". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:15:04 crc kubenswrapper[4803]: I0127 23:15:04.868787 4803 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4423455-a376-41bf-a48c-026ae318916f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 23:15:04 crc kubenswrapper[4803]: I0127 23:15:04.868832 4803 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4423455-a376-41bf-a48c-026ae318916f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 23:15:04 crc kubenswrapper[4803]: I0127 23:15:04.868892 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg259\" (UniqueName: \"kubernetes.io/projected/a4423455-a376-41bf-a48c-026ae318916f-kube-api-access-tg259\") on node \"crc\" DevicePath \"\"" Jan 27 23:15:05 crc kubenswrapper[4803]: I0127 23:15:05.279342 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" event={"ID":"a4423455-a376-41bf-a48c-026ae318916f","Type":"ContainerDied","Data":"b132ffe5a6e567d9ce25f1393d7d95c0c414c07bc63ea16fe7505fcef231c4bc"} Jan 27 23:15:05 crc kubenswrapper[4803]: I0127 23:15:05.279413 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492595-dx8t7" Jan 27 23:15:05 crc kubenswrapper[4803]: I0127 23:15:05.280240 4803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b132ffe5a6e567d9ce25f1393d7d95c0c414c07bc63ea16fe7505fcef231c4bc" Jan 27 23:15:05 crc kubenswrapper[4803]: I0127 23:15:05.791315 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2"] Jan 27 23:15:05 crc kubenswrapper[4803]: I0127 23:15:05.801589 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492550-sbkn2"] Jan 27 23:15:06 crc kubenswrapper[4803]: I0127 23:15:06.319193 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1775032d-620d-4e75-808b-eef53841271a" path="/var/lib/kubelet/pods/1775032d-620d-4e75-808b-eef53841271a/volumes" Jan 27 23:15:14 crc kubenswrapper[4803]: I0127 23:15:14.308300 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:15:14 crc kubenswrapper[4803]: E0127 23:15:14.309477 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:15:25 crc kubenswrapper[4803]: I0127 23:15:25.306584 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:15:25 crc kubenswrapper[4803]: E0127 23:15:25.307689 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:15:36 crc kubenswrapper[4803]: I0127 23:15:36.900937 4803 scope.go:117] "RemoveContainer" containerID="f99baa81b45bb177cf58ad26fe1328c949599626ea71140cc1e0ec92e9d4d4ac" Jan 27 23:15:37 crc kubenswrapper[4803]: I0127 23:15:37.307471 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:15:37 crc kubenswrapper[4803]: E0127 23:15:37.307896 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" Jan 27 23:15:41 crc kubenswrapper[4803]: I0127 23:15:41.723402 4803 generic.go:334] "Generic (PLEG): container finished" podID="a4215496-c9dc-41d2-a133-042eb98a0820" containerID="f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06" exitCode=0 Jan 27 23:15:41 crc kubenswrapper[4803]: I0127 23:15:41.723558 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7kj6c/must-gather-f8th5" event={"ID":"a4215496-c9dc-41d2-a133-042eb98a0820","Type":"ContainerDied","Data":"f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06"} Jan 27 23:15:41 crc kubenswrapper[4803]: I0127 23:15:41.724924 4803 scope.go:117] "RemoveContainer" containerID="f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06" Jan 27 23:15:42 crc kubenswrapper[4803]: I0127 23:15:42.454967 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7kj6c_must-gather-f8th5_a4215496-c9dc-41d2-a133-042eb98a0820/gather/0.log" Jan 27 23:15:50 crc kubenswrapper[4803]: I0127 23:15:50.922387 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7kj6c/must-gather-f8th5"] Jan 27 23:15:50 crc kubenswrapper[4803]: I0127 23:15:50.923837 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7kj6c/must-gather-f8th5" podUID="a4215496-c9dc-41d2-a133-042eb98a0820" containerName="copy" containerID="cri-o://2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f" gracePeriod=2 Jan 27 23:15:50 crc kubenswrapper[4803]: I0127 23:15:50.935541 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7kj6c/must-gather-f8th5"] Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.402283 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7kj6c_must-gather-f8th5_a4215496-c9dc-41d2-a133-042eb98a0820/copy/0.log" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.403028 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7kj6c/must-gather-f8th5" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.453954 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4215496-c9dc-41d2-a133-042eb98a0820-must-gather-output\") pod \"a4215496-c9dc-41d2-a133-042eb98a0820\" (UID: \"a4215496-c9dc-41d2-a133-042eb98a0820\") " Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.454094 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmfs\" (UniqueName: \"kubernetes.io/projected/a4215496-c9dc-41d2-a133-042eb98a0820-kube-api-access-nmmfs\") pod \"a4215496-c9dc-41d2-a133-042eb98a0820\" (UID: \"a4215496-c9dc-41d2-a133-042eb98a0820\") " Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.459228 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4215496-c9dc-41d2-a133-042eb98a0820-kube-api-access-nmmfs" (OuterVolumeSpecName: "kube-api-access-nmmfs") pod "a4215496-c9dc-41d2-a133-042eb98a0820" (UID: "a4215496-c9dc-41d2-a133-042eb98a0820"). InnerVolumeSpecName "kube-api-access-nmmfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.557960 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmmfs\" (UniqueName: \"kubernetes.io/projected/a4215496-c9dc-41d2-a133-042eb98a0820-kube-api-access-nmmfs\") on node \"crc\" DevicePath \"\"" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.628898 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4215496-c9dc-41d2-a133-042eb98a0820-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a4215496-c9dc-41d2-a133-042eb98a0820" (UID: "a4215496-c9dc-41d2-a133-042eb98a0820"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.660818 4803 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4215496-c9dc-41d2-a133-042eb98a0820-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.831389 4803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7kj6c_must-gather-f8th5_a4215496-c9dc-41d2-a133-042eb98a0820/copy/0.log" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.831889 4803 generic.go:334] "Generic (PLEG): container finished" podID="a4215496-c9dc-41d2-a133-042eb98a0820" containerID="2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f" exitCode=143 Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.831948 4803 scope.go:117] "RemoveContainer" containerID="2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.832032 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7kj6c/must-gather-f8th5" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.856287 4803 scope.go:117] "RemoveContainer" containerID="f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.929779 4803 scope.go:117] "RemoveContainer" containerID="2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f" Jan 27 23:15:51 crc kubenswrapper[4803]: E0127 23:15:51.930632 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f\": container with ID starting with 2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f not found: ID does not exist" containerID="2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.930661 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f"} err="failed to get container status \"2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f\": rpc error: code = NotFound desc = could not find container \"2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f\": container with ID starting with 2c66a5d4955f9b5ad397e3ef020d799a6bace9b837bf3ec741d14edb9855175f not found: ID does not exist" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.930682 4803 scope.go:117] "RemoveContainer" containerID="f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06" Jan 27 23:15:51 crc kubenswrapper[4803]: E0127 23:15:51.930999 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06\": container with ID starting with f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06 not found: ID does not exist" containerID="f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06" Jan 27 23:15:51 crc kubenswrapper[4803]: I0127 23:15:51.931050 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06"} err="failed to get container status \"f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06\": rpc error: code = NotFound desc = could not find container \"f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06\": container with ID starting with f1350467c47785b25c76a50830fe250135a74a798ac097c54c8949e4d22d5f06 not found: ID does not exist" Jan 27 23:15:52 crc kubenswrapper[4803]: I0127 23:15:52.307337 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d" Jan 27 23:15:52 crc kubenswrapper[4803]: I0127 23:15:52.320466 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4215496-c9dc-41d2-a133-042eb98a0820" path="/var/lib/kubelet/pods/a4215496-c9dc-41d2-a133-042eb98a0820/volumes" Jan 27 23:15:52 crc kubenswrapper[4803]: I0127 23:15:52.846980 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"f1fd79617cecafa0e5a6b165ba06fdf6bea7229ea8721f4bf879baf531e446c8"} Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 
23:16:47.257734 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dbkfb"] Jan 27 23:16:47 crc kubenswrapper[4803]: E0127 23:16:47.258658 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4215496-c9dc-41d2-a133-042eb98a0820" containerName="gather" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.258671 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4215496-c9dc-41d2-a133-042eb98a0820" containerName="gather" Jan 27 23:16:47 crc kubenswrapper[4803]: E0127 23:16:47.258704 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4215496-c9dc-41d2-a133-042eb98a0820" containerName="copy" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.258710 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4215496-c9dc-41d2-a133-042eb98a0820" containerName="copy" Jan 27 23:16:47 crc kubenswrapper[4803]: E0127 23:16:47.258718 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4423455-a376-41bf-a48c-026ae318916f" containerName="collect-profiles" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.258724 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4423455-a376-41bf-a48c-026ae318916f" containerName="collect-profiles" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.258998 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4423455-a376-41bf-a48c-026ae318916f" containerName="collect-profiles" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.259021 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4215496-c9dc-41d2-a133-042eb98a0820" containerName="copy" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.259034 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4215496-c9dc-41d2-a133-042eb98a0820" containerName="gather" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.261356 4803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.284488 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbkfb"] Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.361608 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-utilities\") pod \"redhat-marketplace-dbkfb\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.361917 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-catalog-content\") pod \"redhat-marketplace-dbkfb\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.362122 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54bd\" (UniqueName: \"kubernetes.io/projected/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-kube-api-access-x54bd\") pod \"redhat-marketplace-dbkfb\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.466005 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-utilities\") pod \"redhat-marketplace-dbkfb\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.466183 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-catalog-content\") pod \"redhat-marketplace-dbkfb\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.466268 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x54bd\" (UniqueName: \"kubernetes.io/projected/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-kube-api-access-x54bd\") pod \"redhat-marketplace-dbkfb\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.467163 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-utilities\") pod \"redhat-marketplace-dbkfb\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.467305 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-catalog-content\") pod \"redhat-marketplace-dbkfb\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.494097 4803 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x54bd\" (UniqueName: \"kubernetes.io/projected/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-kube-api-access-x54bd\") pod \"redhat-marketplace-dbkfb\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:47 crc kubenswrapper[4803]: I0127 23:16:47.616200 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:48 crc kubenswrapper[4803]: I0127 23:16:48.117201 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbkfb"] Jan 27 23:16:48 crc kubenswrapper[4803]: W0127 23:16:48.117903 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79ef7dc5_7b4f_4942_9dec_c3554d67e87f.slice/crio-5039ca97ff19cb29812c2be4a3768249bc8b9682a1b148babadb3bfe69e84e59 WatchSource:0}: Error finding container 5039ca97ff19cb29812c2be4a3768249bc8b9682a1b148babadb3bfe69e84e59: Status 404 returned error can't find the container with id 5039ca97ff19cb29812c2be4a3768249bc8b9682a1b148babadb3bfe69e84e59 Jan 27 23:16:48 crc kubenswrapper[4803]: I0127 23:16:48.593221 4803 generic.go:334] "Generic (PLEG): container finished" podID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" containerID="6de181976c095a3649c9e7e9aeaa82170b93b3e0bf349bddfad491040f1dae9e" exitCode=0 Jan 27 23:16:48 crc kubenswrapper[4803]: I0127 23:16:48.593278 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbkfb" event={"ID":"79ef7dc5-7b4f-4942-9dec-c3554d67e87f","Type":"ContainerDied","Data":"6de181976c095a3649c9e7e9aeaa82170b93b3e0bf349bddfad491040f1dae9e"} Jan 27 23:16:48 crc kubenswrapper[4803]: I0127 23:16:48.593749 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbkfb" event={"ID":"79ef7dc5-7b4f-4942-9dec-c3554d67e87f","Type":"ContainerStarted","Data":"5039ca97ff19cb29812c2be4a3768249bc8b9682a1b148babadb3bfe69e84e59"} Jan 27 23:16:48 crc kubenswrapper[4803]: I0127 23:16:48.604537 4803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 23:16:50 crc kubenswrapper[4803]: I0127 23:16:50.614596 4803 generic.go:334] "Generic (PLEG): container finished" podID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" containerID="7f4f1f0b27a7fa69f8245fec9181fca3fe507458c92e13da824805771bc155cc" exitCode=0 Jan 27 23:16:50 crc kubenswrapper[4803]: I0127 23:16:50.614658 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbkfb" event={"ID":"79ef7dc5-7b4f-4942-9dec-c3554d67e87f","Type":"ContainerDied","Data":"7f4f1f0b27a7fa69f8245fec9181fca3fe507458c92e13da824805771bc155cc"} Jan 27 23:16:51 crc kubenswrapper[4803]: I0127 23:16:51.629409 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbkfb" event={"ID":"79ef7dc5-7b4f-4942-9dec-c3554d67e87f","Type":"ContainerStarted","Data":"c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b"} Jan 27 23:16:51 crc kubenswrapper[4803]: I0127 23:16:51.657134 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dbkfb" podStartSLOduration=2.2266076200000002 podStartE2EDuration="4.657108416s" podCreationTimestamp="2026-01-27 23:16:47 +0000 UTC" firstStartedPulling="2026-01-27 23:16:48.595260701 +0000 UTC m=+5361.011282430" 
lastFinishedPulling="2026-01-27 23:16:51.025761537 +0000 UTC m=+5363.441783226" observedRunningTime="2026-01-27 23:16:51.650631591 +0000 UTC m=+5364.066653310" watchObservedRunningTime="2026-01-27 23:16:51.657108416 +0000 UTC m=+5364.073130125" Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.708339 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4zf8v"] Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.710997 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.749369 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zf8v"] Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.833653 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-catalog-content\") pod \"community-operators-4zf8v\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.834146 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxfd2\" (UniqueName: \"kubernetes.io/projected/03c94572-33f4-44b9-aff8-c472ea94f19f-kube-api-access-jxfd2\") pod \"community-operators-4zf8v\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.834342 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-utilities\") pod \"community-operators-4zf8v\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.936123 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxfd2\" (UniqueName: \"kubernetes.io/projected/03c94572-33f4-44b9-aff8-c472ea94f19f-kube-api-access-jxfd2\") pod \"community-operators-4zf8v\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.936356 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-utilities\") pod \"community-operators-4zf8v\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.936407 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-catalog-content\") pod \"community-operators-4zf8v\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.936892 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-utilities\") pod \"community-operators-4zf8v\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " 
pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.936989 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-catalog-content\") pod \"community-operators-4zf8v\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:16:54 crc kubenswrapper[4803]: I0127 23:16:54.963154 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxfd2\" (UniqueName: \"kubernetes.io/projected/03c94572-33f4-44b9-aff8-c472ea94f19f-kube-api-access-jxfd2\") pod \"community-operators-4zf8v\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:16:55 crc kubenswrapper[4803]: I0127 23:16:55.033773 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:16:55 crc kubenswrapper[4803]: I0127 23:16:55.791063 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zf8v"] Jan 27 23:16:56 crc kubenswrapper[4803]: I0127 23:16:56.691181 4803 generic.go:334] "Generic (PLEG): container finished" podID="03c94572-33f4-44b9-aff8-c472ea94f19f" containerID="5f5709c6fb3d372e7ee7c4c70a7de78153b34489a0f5fd6c7157cb2c1b8f67e6" exitCode=0 Jan 27 23:16:56 crc kubenswrapper[4803]: I0127 23:16:56.691268 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zf8v" event={"ID":"03c94572-33f4-44b9-aff8-c472ea94f19f","Type":"ContainerDied","Data":"5f5709c6fb3d372e7ee7c4c70a7de78153b34489a0f5fd6c7157cb2c1b8f67e6"} Jan 27 23:16:56 crc kubenswrapper[4803]: I0127 23:16:56.691771 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zf8v" event={"ID":"03c94572-33f4-44b9-aff8-c472ea94f19f","Type":"ContainerStarted","Data":"ec8755254e4ad5a3270424e8f4b2fdea384b39973ac2e9f1622f1f3f6ba91355"} Jan 27 23:16:57 crc kubenswrapper[4803]: I0127 23:16:57.617267 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:57 crc kubenswrapper[4803]: I0127 23:16:57.617737 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:57 crc kubenswrapper[4803]: I0127 23:16:57.689509 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:57 crc kubenswrapper[4803]: I0127 23:16:57.704035 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zf8v" event={"ID":"03c94572-33f4-44b9-aff8-c472ea94f19f","Type":"ContainerStarted","Data":"8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c"} Jan 27 23:16:57 crc kubenswrapper[4803]: I0127 23:16:57.764678 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:16:59 crc kubenswrapper[4803]: I0127 23:16:59.727385 4803 generic.go:334] "Generic (PLEG): container finished" podID="03c94572-33f4-44b9-aff8-c472ea94f19f" containerID="8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c" exitCode=0 Jan 27 23:16:59 crc kubenswrapper[4803]: I0127 23:16:59.727428 4803 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-4zf8v" event={"ID":"03c94572-33f4-44b9-aff8-c472ea94f19f","Type":"ContainerDied","Data":"8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c"} Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.086116 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbkfb"] Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.086351 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dbkfb" podUID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" containerName="registry-server" containerID="cri-o://c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b" gracePeriod=2 Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.687690 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.740860 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zf8v" event={"ID":"03c94572-33f4-44b9-aff8-c472ea94f19f","Type":"ContainerStarted","Data":"da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c"} Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.745186 4803 generic.go:334] "Generic (PLEG): container finished" podID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" containerID="c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b" exitCode=0 Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.745234 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbkfb" event={"ID":"79ef7dc5-7b4f-4942-9dec-c3554d67e87f","Type":"ContainerDied","Data":"c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b"} Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.745265 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbkfb" event={"ID":"79ef7dc5-7b4f-4942-9dec-c3554d67e87f","Type":"ContainerDied","Data":"5039ca97ff19cb29812c2be4a3768249bc8b9682a1b148babadb3bfe69e84e59"} Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.745273 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbkfb" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.745284 4803 scope.go:117] "RemoveContainer" containerID="c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.762494 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4zf8v" podStartSLOduration=3.301122775 podStartE2EDuration="6.762474175s" podCreationTimestamp="2026-01-27 23:16:54 +0000 UTC" firstStartedPulling="2026-01-27 23:16:56.694017511 +0000 UTC m=+5369.110039210" lastFinishedPulling="2026-01-27 23:17:00.155368901 +0000 UTC m=+5372.571390610" observedRunningTime="2026-01-27 23:17:00.757204482 +0000 UTC m=+5373.173226201" watchObservedRunningTime="2026-01-27 23:17:00.762474175 +0000 UTC m=+5373.178495874" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.769117 4803 scope.go:117] "RemoveContainer" containerID="7f4f1f0b27a7fa69f8245fec9181fca3fe507458c92e13da824805771bc155cc" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.795005 4803 scope.go:117] "RemoveContainer" containerID="6de181976c095a3649c9e7e9aeaa82170b93b3e0bf349bddfad491040f1dae9e" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.850143 4803 scope.go:117] "RemoveContainer" containerID="c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b" Jan 27 23:17:00 crc kubenswrapper[4803]: E0127 23:17:00.850607 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b\": container with ID starting with c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b not found: ID does not exist" containerID="c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.850680 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b"} err="failed to get container status \"c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b\": rpc error: code = NotFound desc = could not find container \"c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b\": container with ID starting with c3abb85509f6c2565dd15df1c26a6d165d30de6556e7e6a2596b5822b9b4294b not found: ID does not exist" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.850712 4803 scope.go:117] "RemoveContainer" containerID="7f4f1f0b27a7fa69f8245fec9181fca3fe507458c92e13da824805771bc155cc" Jan 27 23:17:00 crc kubenswrapper[4803]: E0127 23:17:00.851342 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f4f1f0b27a7fa69f8245fec9181fca3fe507458c92e13da824805771bc155cc\": container with ID starting with 7f4f1f0b27a7fa69f8245fec9181fca3fe507458c92e13da824805771bc155cc not found: ID does not exist" containerID="7f4f1f0b27a7fa69f8245fec9181fca3fe507458c92e13da824805771bc155cc" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.851397 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f4f1f0b27a7fa69f8245fec9181fca3fe507458c92e13da824805771bc155cc"} err="failed to get container status \"7f4f1f0b27a7fa69f8245fec9181fca3fe507458c92e13da824805771bc155cc\": rpc error: code = NotFound desc = could not find container 
\"7f4f1f0b27a7fa69f8245fec9181fca3fe507458c92e13da824805771bc155cc\": container with ID starting with 7f4f1f0b27a7fa69f8245fec9181fca3fe507458c92e13da824805771bc155cc not found: ID does not exist" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.851429 4803 scope.go:117] "RemoveContainer" containerID="6de181976c095a3649c9e7e9aeaa82170b93b3e0bf349bddfad491040f1dae9e" Jan 27 23:17:00 crc kubenswrapper[4803]: E0127 23:17:00.851748 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de181976c095a3649c9e7e9aeaa82170b93b3e0bf349bddfad491040f1dae9e\": container with ID starting with 6de181976c095a3649c9e7e9aeaa82170b93b3e0bf349bddfad491040f1dae9e not found: ID does not exist" containerID="6de181976c095a3649c9e7e9aeaa82170b93b3e0bf349bddfad491040f1dae9e" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.851796 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de181976c095a3649c9e7e9aeaa82170b93b3e0bf349bddfad491040f1dae9e"} err="failed to get container status \"6de181976c095a3649c9e7e9aeaa82170b93b3e0bf349bddfad491040f1dae9e\": rpc error: code = NotFound desc = could not find container \"6de181976c095a3649c9e7e9aeaa82170b93b3e0bf349bddfad491040f1dae9e\": container with ID starting with 6de181976c095a3649c9e7e9aeaa82170b93b3e0bf349bddfad491040f1dae9e not found: ID does not exist" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.859160 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-catalog-content\") pod \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.859241 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-utilities\") pod \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.859472 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x54bd\" (UniqueName: \"kubernetes.io/projected/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-kube-api-access-x54bd\") pod \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\" (UID: \"79ef7dc5-7b4f-4942-9dec-c3554d67e87f\") " Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.860112 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-utilities" (OuterVolumeSpecName: "utilities") pod "79ef7dc5-7b4f-4942-9dec-c3554d67e87f" (UID: "79ef7dc5-7b4f-4942-9dec-c3554d67e87f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.866535 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-kube-api-access-x54bd" (OuterVolumeSpecName: "kube-api-access-x54bd") pod "79ef7dc5-7b4f-4942-9dec-c3554d67e87f" (UID: "79ef7dc5-7b4f-4942-9dec-c3554d67e87f"). InnerVolumeSpecName "kube-api-access-x54bd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.881524 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79ef7dc5-7b4f-4942-9dec-c3554d67e87f" (UID: "79ef7dc5-7b4f-4942-9dec-c3554d67e87f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.962769 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.962801 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 23:17:00 crc kubenswrapper[4803]: I0127 23:17:00.962812 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x54bd\" (UniqueName: \"kubernetes.io/projected/79ef7dc5-7b4f-4942-9dec-c3554d67e87f-kube-api-access-x54bd\") on node \"crc\" DevicePath \"\"" Jan 27 23:17:01 crc kubenswrapper[4803]: I0127 23:17:01.084411 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbkfb"] Jan 27 23:17:01 crc kubenswrapper[4803]: I0127 23:17:01.094908 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbkfb"] Jan 27 23:17:02 crc kubenswrapper[4803]: I0127 23:17:02.321713 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" path="/var/lib/kubelet/pods/79ef7dc5-7b4f-4942-9dec-c3554d67e87f/volumes" Jan 27 23:17:05 crc kubenswrapper[4803]: I0127 23:17:05.034697 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:17:05 crc kubenswrapper[4803]: I0127 23:17:05.035177 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:17:05 crc kubenswrapper[4803]: I0127 23:17:05.132196 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:17:05 crc kubenswrapper[4803]: I0127 23:17:05.866675 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:17:06 crc kubenswrapper[4803]: I0127 23:17:06.288276 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4zf8v"] Jan 27 23:17:07 crc kubenswrapper[4803]: I0127 23:17:07.820942 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4zf8v" podUID="03c94572-33f4-44b9-aff8-c472ea94f19f" containerName="registry-server" containerID="cri-o://da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c" gracePeriod=2 Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.481668 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.568629 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-utilities\") pod \"03c94572-33f4-44b9-aff8-c472ea94f19f\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.568716 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxfd2\" (UniqueName: \"kubernetes.io/projected/03c94572-33f4-44b9-aff8-c472ea94f19f-kube-api-access-jxfd2\") pod \"03c94572-33f4-44b9-aff8-c472ea94f19f\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.568786 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-catalog-content\") pod \"03c94572-33f4-44b9-aff8-c472ea94f19f\" (UID: \"03c94572-33f4-44b9-aff8-c472ea94f19f\") " Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.569679 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-utilities" (OuterVolumeSpecName: "utilities") pod "03c94572-33f4-44b9-aff8-c472ea94f19f" (UID: "03c94572-33f4-44b9-aff8-c472ea94f19f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.575876 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c94572-33f4-44b9-aff8-c472ea94f19f-kube-api-access-jxfd2" (OuterVolumeSpecName: "kube-api-access-jxfd2") pod "03c94572-33f4-44b9-aff8-c472ea94f19f" (UID: "03c94572-33f4-44b9-aff8-c472ea94f19f"). InnerVolumeSpecName "kube-api-access-jxfd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.643902 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03c94572-33f4-44b9-aff8-c472ea94f19f" (UID: "03c94572-33f4-44b9-aff8-c472ea94f19f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.672646 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.672684 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxfd2\" (UniqueName: \"kubernetes.io/projected/03c94572-33f4-44b9-aff8-c472ea94f19f-kube-api-access-jxfd2\") on node \"crc\" DevicePath \"\"" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.672701 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c94572-33f4-44b9-aff8-c472ea94f19f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.835742 4803 generic.go:334] "Generic (PLEG): container finished" podID="03c94572-33f4-44b9-aff8-c472ea94f19f" containerID="da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c" exitCode=0 Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.835810 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zf8v" event={"ID":"03c94572-33f4-44b9-aff8-c472ea94f19f","Type":"ContainerDied","Data":"da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c"} Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.835840 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zf8v" event={"ID":"03c94572-33f4-44b9-aff8-c472ea94f19f","Type":"ContainerDied","Data":"ec8755254e4ad5a3270424e8f4b2fdea384b39973ac2e9f1622f1f3f6ba91355"} Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.835876 4803 scope.go:117] "RemoveContainer" containerID="da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.835869 4803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4zf8v" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.870944 4803 scope.go:117] "RemoveContainer" containerID="8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.895264 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4zf8v"] Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.903556 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4zf8v"] Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.905481 4803 scope.go:117] "RemoveContainer" containerID="5f5709c6fb3d372e7ee7c4c70a7de78153b34489a0f5fd6c7157cb2c1b8f67e6" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.954309 4803 scope.go:117] "RemoveContainer" containerID="da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c" Jan 27 23:17:08 crc kubenswrapper[4803]: E0127 23:17:08.954677 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c\": container with ID starting with da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c not found: ID does not exist" containerID="da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.954726 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c"} err="failed to get container status \"da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c\": rpc error: code = NotFound desc = could not find container \"da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c\": container with ID starting with da7b74ebdefcc9138b78b2861508ed22f78012f731946c2e4b34f60effdfbf2c not found: ID does not exist" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.954754 4803 scope.go:117] "RemoveContainer" containerID="8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c" Jan 27 23:17:08 crc kubenswrapper[4803]: E0127 23:17:08.955094 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c\": container with ID starting with 8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c not found: ID does not exist" containerID="8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.955128 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c"} err="failed to get container status \"8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c\": rpc error: code = NotFound desc = could not find container \"8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c\": container with ID starting with 8268a48ccf4a97169a1b27e99e11705949194a78504bb0c503731a052598c74c not found: ID does not exist" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.955199 4803 scope.go:117] "RemoveContainer" containerID="5f5709c6fb3d372e7ee7c4c70a7de78153b34489a0f5fd6c7157cb2c1b8f67e6" Jan 27 23:17:08 crc kubenswrapper[4803]: E0127 23:17:08.955562 4803 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5f5709c6fb3d372e7ee7c4c70a7de78153b34489a0f5fd6c7157cb2c1b8f67e6\": container with ID starting with 5f5709c6fb3d372e7ee7c4c70a7de78153b34489a0f5fd6c7157cb2c1b8f67e6 not found: ID does not exist" containerID="5f5709c6fb3d372e7ee7c4c70a7de78153b34489a0f5fd6c7157cb2c1b8f67e6" Jan 27 23:17:08 crc kubenswrapper[4803]: I0127 23:17:08.955592 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5709c6fb3d372e7ee7c4c70a7de78153b34489a0f5fd6c7157cb2c1b8f67e6"} err="failed to get container status \"5f5709c6fb3d372e7ee7c4c70a7de78153b34489a0f5fd6c7157cb2c1b8f67e6\": rpc error: code = NotFound desc = could not find container \"5f5709c6fb3d372e7ee7c4c70a7de78153b34489a0f5fd6c7157cb2c1b8f67e6\": container with ID starting with 5f5709c6fb3d372e7ee7c4c70a7de78153b34489a0f5fd6c7157cb2c1b8f67e6 not found: ID does not exist" Jan 27 23:17:10 crc kubenswrapper[4803]: I0127 23:17:10.321087 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03c94572-33f4-44b9-aff8-c472ea94f19f" path="/var/lib/kubelet/pods/03c94572-33f4-44b9-aff8-c472ea94f19f/volumes" Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.297929 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8z4lq"] Jan 27 23:17:53 crc kubenswrapper[4803]: E0127 23:17:53.299504 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c94572-33f4-44b9-aff8-c472ea94f19f" containerName="registry-server" Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.299539 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c94572-33f4-44b9-aff8-c472ea94f19f" containerName="registry-server" Jan 27 23:17:53 crc kubenswrapper[4803]: E0127 23:17:53.299622 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c94572-33f4-44b9-aff8-c472ea94f19f" containerName="extract-utilities" Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.299641 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c94572-33f4-44b9-aff8-c472ea94f19f" containerName="extract-utilities" Jan 27 23:17:53 crc kubenswrapper[4803]: E0127 23:17:53.299676 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" containerName="extract-content" Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.299694 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" containerName="extract-content" Jan 27 23:17:53 crc kubenswrapper[4803]: E0127 23:17:53.299713 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c94572-33f4-44b9-aff8-c472ea94f19f" containerName="extract-content" Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.299725 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c94572-33f4-44b9-aff8-c472ea94f19f" containerName="extract-content" Jan 27 23:17:53 crc kubenswrapper[4803]: E0127 23:17:53.299751 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" containerName="registry-server" Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.299763 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" containerName="registry-server" Jan 27 23:17:53 crc kubenswrapper[4803]: E0127 23:17:53.299789 4803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" containerName="extract-utilities" 
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.299801 4803 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" containerName="extract-utilities"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.300435 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ef7dc5-7b4f-4942-9dec-c3554d67e87f" containerName="registry-server"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.300488 4803 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c94572-33f4-44b9-aff8-c472ea94f19f" containerName="registry-server"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.304554 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.309773 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8z4lq"]
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.362694 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-utilities\") pod \"redhat-operators-8z4lq\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") " pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.362766 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k55bn\" (UniqueName: \"kubernetes.io/projected/f93741d1-b7f5-4416-a533-0613f9e5e533-kube-api-access-k55bn\") pod \"redhat-operators-8z4lq\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") " pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.362823 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-catalog-content\") pod \"redhat-operators-8z4lq\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") " pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.465705 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k55bn\" (UniqueName: \"kubernetes.io/projected/f93741d1-b7f5-4416-a533-0613f9e5e533-kube-api-access-k55bn\") pod \"redhat-operators-8z4lq\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") " pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.465778 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-catalog-content\") pod \"redhat-operators-8z4lq\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") " pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.465970 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-utilities\") pod \"redhat-operators-8z4lq\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") " pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.466319 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-catalog-content\") pod \"redhat-operators-8z4lq\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") " pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.466417 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-utilities\") pod \"redhat-operators-8z4lq\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") " pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.484615 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k55bn\" (UniqueName: \"kubernetes.io/projected/f93741d1-b7f5-4416-a533-0613f9e5e533-kube-api-access-k55bn\") pod \"redhat-operators-8z4lq\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") " pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:17:53 crc kubenswrapper[4803]: I0127 23:17:53.638479 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:17:54 crc kubenswrapper[4803]: I0127 23:17:54.533516 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8z4lq"]
Jan 27 23:17:55 crc kubenswrapper[4803]: I0127 23:17:55.401416 4803 generic.go:334] "Generic (PLEG): container finished" podID="f93741d1-b7f5-4416-a533-0613f9e5e533" containerID="794e4fc77828241f36155f7ebe03b3c47a42ca33ba056095d5e5e1ce805edf7d" exitCode=0
Jan 27 23:17:55 crc kubenswrapper[4803]: I0127 23:17:55.401498 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8z4lq" event={"ID":"f93741d1-b7f5-4416-a533-0613f9e5e533","Type":"ContainerDied","Data":"794e4fc77828241f36155f7ebe03b3c47a42ca33ba056095d5e5e1ce805edf7d"}
Jan 27 23:17:55 crc kubenswrapper[4803]: I0127 23:17:55.401741 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8z4lq" event={"ID":"f93741d1-b7f5-4416-a533-0613f9e5e533","Type":"ContainerStarted","Data":"87a9ddc0d5d79de2ef4e048c3e2e76a5ed5222f6646c80866406b5882cfc100d"}
Jan 27 23:17:57 crc kubenswrapper[4803]: I0127 23:17:57.424000 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8z4lq" event={"ID":"f93741d1-b7f5-4416-a533-0613f9e5e533","Type":"ContainerStarted","Data":"fac8b6a3556887e4dbbf116344eb273851be111839de004e048de65941b88f1b"}
Jan 27 23:18:01 crc kubenswrapper[4803]: I0127 23:18:01.469784 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8z4lq" event={"ID":"f93741d1-b7f5-4416-a533-0613f9e5e533","Type":"ContainerDied","Data":"fac8b6a3556887e4dbbf116344eb273851be111839de004e048de65941b88f1b"}
Jan 27 23:18:01 crc kubenswrapper[4803]: I0127 23:18:01.469734 4803 generic.go:334] "Generic (PLEG): container finished" podID="f93741d1-b7f5-4416-a533-0613f9e5e533" containerID="fac8b6a3556887e4dbbf116344eb273851be111839de004e048de65941b88f1b" exitCode=0
Jan 27 23:18:02 crc kubenswrapper[4803]: I0127 23:18:02.489381 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8z4lq" event={"ID":"f93741d1-b7f5-4416-a533-0613f9e5e533","Type":"ContainerStarted","Data":"2c51dcac5473be4cad1fe6fea58b53fa3bb565b75c6ecfeba6bd27398fc3e2b7"}
Jan 27 23:18:02 crc kubenswrapper[4803]: I0127 23:18:02.528325 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8z4lq" podStartSLOduration=3.030903542 podStartE2EDuration="9.52830255s" podCreationTimestamp="2026-01-27 23:17:53 +0000 UTC" firstStartedPulling="2026-01-27 23:17:55.404570717 +0000 UTC m=+5427.820592416" lastFinishedPulling="2026-01-27 23:18:01.901969725 +0000 UTC m=+5434.317991424" observedRunningTime="2026-01-27 23:18:02.517748854 +0000 UTC m=+5434.933770553" watchObservedRunningTime="2026-01-27 23:18:02.52830255 +0000 UTC m=+5434.944324249"
Jan 27 23:18:03 crc kubenswrapper[4803]: I0127 23:18:03.639589 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:18:03 crc kubenswrapper[4803]: I0127 23:18:03.639647 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:18:04 crc kubenswrapper[4803]: I0127 23:18:04.696742 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8z4lq" podUID="f93741d1-b7f5-4416-a533-0613f9e5e533" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:18:04 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:18:04 crc kubenswrapper[4803]: >
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.437970 4803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cck8b"]
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.440907 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.452145 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cck8b"]
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.496834 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-catalog-content\") pod \"certified-operators-cck8b\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") " pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.497171 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-utilities\") pod \"certified-operators-cck8b\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") " pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.497600 4803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqkck\" (UniqueName: \"kubernetes.io/projected/2684b25c-93fe-445e-b047-c3f7d7f93570-kube-api-access-rqkck\") pod \"certified-operators-cck8b\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") " pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.600593 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqkck\" (UniqueName: \"kubernetes.io/projected/2684b25c-93fe-445e-b047-c3f7d7f93570-kube-api-access-rqkck\") pod \"certified-operators-cck8b\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") " pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.600714 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-utilities\") pod \"certified-operators-cck8b\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") " pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.600742 4803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-catalog-content\") pod \"certified-operators-cck8b\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") " pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.601497 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-utilities\") pod \"certified-operators-cck8b\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") " pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.601526 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-catalog-content\") pod \"certified-operators-cck8b\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") " pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.625795 4803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqkck\" (UniqueName: \"kubernetes.io/projected/2684b25c-93fe-445e-b047-c3f7d7f93570-kube-api-access-rqkck\") pod \"certified-operators-cck8b\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") " pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:07 crc kubenswrapper[4803]: I0127 23:18:07.762480 4803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:08 crc kubenswrapper[4803]: I0127 23:18:08.375345 4803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cck8b"]
Jan 27 23:18:08 crc kubenswrapper[4803]: W0127 23:18:08.378256 4803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2684b25c_93fe_445e_b047_c3f7d7f93570.slice/crio-018b7d2875e76225d381a59d105f5203bfef184dda8706447914398c6de3dab6 WatchSource:0}: Error finding container 018b7d2875e76225d381a59d105f5203bfef184dda8706447914398c6de3dab6: Status 404 returned error can't find the container with id 018b7d2875e76225d381a59d105f5203bfef184dda8706447914398c6de3dab6
Jan 27 23:18:08 crc kubenswrapper[4803]: I0127 23:18:08.563881 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cck8b" event={"ID":"2684b25c-93fe-445e-b047-c3f7d7f93570","Type":"ContainerStarted","Data":"018b7d2875e76225d381a59d105f5203bfef184dda8706447914398c6de3dab6"}
Jan 27 23:18:09 crc kubenswrapper[4803]: I0127 23:18:09.576494 4803 generic.go:334] "Generic (PLEG): container finished" podID="2684b25c-93fe-445e-b047-c3f7d7f93570" containerID="12a364d10cc201403a09889645ef7b2aefb4998bc12f09d9063e6359221aa2f3" exitCode=0
Jan 27 23:18:09 crc kubenswrapper[4803]: I0127 23:18:09.576551 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cck8b" event={"ID":"2684b25c-93fe-445e-b047-c3f7d7f93570","Type":"ContainerDied","Data":"12a364d10cc201403a09889645ef7b2aefb4998bc12f09d9063e6359221aa2f3"}
Jan 27 23:18:10 crc kubenswrapper[4803]: I0127 23:18:10.588430 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cck8b" event={"ID":"2684b25c-93fe-445e-b047-c3f7d7f93570","Type":"ContainerStarted","Data":"6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21"}
Jan 27 23:18:12 crc kubenswrapper[4803]: I0127 23:18:12.620106 4803 generic.go:334] "Generic (PLEG): container finished" podID="2684b25c-93fe-445e-b047-c3f7d7f93570" containerID="6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21" exitCode=0
Jan 27 23:18:12 crc kubenswrapper[4803]: I0127 23:18:12.620156 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cck8b" event={"ID":"2684b25c-93fe-445e-b047-c3f7d7f93570","Type":"ContainerDied","Data":"6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21"}
Jan 27 23:18:13 crc kubenswrapper[4803]: I0127 23:18:13.635629 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cck8b" event={"ID":"2684b25c-93fe-445e-b047-c3f7d7f93570","Type":"ContainerStarted","Data":"6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4"}
Jan 27 23:18:13 crc kubenswrapper[4803]: I0127 23:18:13.662971 4803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cck8b" podStartSLOduration=3.178480799 podStartE2EDuration="6.662950402s" podCreationTimestamp="2026-01-27 23:18:07 +0000 UTC" firstStartedPulling="2026-01-27 23:18:09.57900092 +0000 UTC m=+5441.995022619" lastFinishedPulling="2026-01-27 23:18:13.063470523 +0000 UTC m=+5445.479492222" observedRunningTime="2026-01-27 23:18:13.658624846 +0000 UTC m=+5446.074646565" watchObservedRunningTime="2026-01-27 23:18:13.662950402 +0000 UTC m=+5446.078972101"
Jan 27 23:18:14 crc kubenswrapper[4803]: I0127 23:18:14.710338 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8z4lq" podUID="f93741d1-b7f5-4416-a533-0613f9e5e533" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:18:14 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:18:14 crc kubenswrapper[4803]: >
Jan 27 23:18:16 crc kubenswrapper[4803]: I0127 23:18:16.344047 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 23:18:16 crc kubenswrapper[4803]: I0127 23:18:16.344438 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 23:18:17 crc kubenswrapper[4803]: I0127 23:18:17.763479 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:17 crc kubenswrapper[4803]: I0127 23:18:17.763889 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:18 crc kubenswrapper[4803]: I0127 23:18:18.815473 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cck8b" podUID="2684b25c-93fe-445e-b047-c3f7d7f93570" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:18:18 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:18:18 crc kubenswrapper[4803]: >
Jan 27 23:18:24 crc kubenswrapper[4803]: I0127 23:18:24.689758 4803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8z4lq" podUID="f93741d1-b7f5-4416-a533-0613f9e5e533" containerName="registry-server" probeResult="failure" output=<
Jan 27 23:18:24 crc kubenswrapper[4803]: timeout: failed to connect service ":50051" within 1s
Jan 27 23:18:24 crc kubenswrapper[4803]: >
Jan 27 23:18:27 crc kubenswrapper[4803]: I0127 23:18:27.820550 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:27 crc kubenswrapper[4803]: I0127 23:18:27.890271 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:28 crc kubenswrapper[4803]: I0127 23:18:28.067217 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cck8b"]
Jan 27 23:18:29 crc kubenswrapper[4803]: I0127 23:18:29.825799 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cck8b" podUID="2684b25c-93fe-445e-b047-c3f7d7f93570" containerName="registry-server" containerID="cri-o://6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4" gracePeriod=2
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.503327 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.637209 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqkck\" (UniqueName: \"kubernetes.io/projected/2684b25c-93fe-445e-b047-c3f7d7f93570-kube-api-access-rqkck\") pod \"2684b25c-93fe-445e-b047-c3f7d7f93570\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") "
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.637267 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-catalog-content\") pod \"2684b25c-93fe-445e-b047-c3f7d7f93570\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") "
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.637447 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-utilities\") pod \"2684b25c-93fe-445e-b047-c3f7d7f93570\" (UID: \"2684b25c-93fe-445e-b047-c3f7d7f93570\") "
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.639044 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-utilities" (OuterVolumeSpecName: "utilities") pod "2684b25c-93fe-445e-b047-c3f7d7f93570" (UID: "2684b25c-93fe-445e-b047-c3f7d7f93570"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.652116 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2684b25c-93fe-445e-b047-c3f7d7f93570-kube-api-access-rqkck" (OuterVolumeSpecName: "kube-api-access-rqkck") pod "2684b25c-93fe-445e-b047-c3f7d7f93570" (UID: "2684b25c-93fe-445e-b047-c3f7d7f93570"). InnerVolumeSpecName "kube-api-access-rqkck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.711958 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2684b25c-93fe-445e-b047-c3f7d7f93570" (UID: "2684b25c-93fe-445e-b047-c3f7d7f93570"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.740070 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqkck\" (UniqueName: \"kubernetes.io/projected/2684b25c-93fe-445e-b047-c3f7d7f93570-kube-api-access-rqkck\") on node \"crc\" DevicePath \"\""
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.740115 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.740128 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2684b25c-93fe-445e-b047-c3f7d7f93570-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.839009 4803 generic.go:334] "Generic (PLEG): container finished" podID="2684b25c-93fe-445e-b047-c3f7d7f93570" containerID="6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4" exitCode=0
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.839082 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cck8b" event={"ID":"2684b25c-93fe-445e-b047-c3f7d7f93570","Type":"ContainerDied","Data":"6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4"}
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.839145 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cck8b" event={"ID":"2684b25c-93fe-445e-b047-c3f7d7f93570","Type":"ContainerDied","Data":"018b7d2875e76225d381a59d105f5203bfef184dda8706447914398c6de3dab6"}
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.839150 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cck8b"
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.839167 4803 scope.go:117] "RemoveContainer" containerID="6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4"
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.874097 4803 scope.go:117] "RemoveContainer" containerID="6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21"
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.879716 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cck8b"]
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.891453 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cck8b"]
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.907000 4803 scope.go:117] "RemoveContainer" containerID="12a364d10cc201403a09889645ef7b2aefb4998bc12f09d9063e6359221aa2f3"
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.954249 4803 scope.go:117] "RemoveContainer" containerID="6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4"
Jan 27 23:18:30 crc kubenswrapper[4803]: E0127 23:18:30.954837 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4\": container with ID starting with 6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4 not found: ID does not exist" containerID="6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4"
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.954885 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4"} err="failed to get container status \"6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4\": rpc error: code = NotFound desc = could not find container \"6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4\": container with ID starting with 6c6cb1b7eedb9647e65d5b450cc08101d0ee685cb7c36ebd1427189c7f3f3fd4 not found: ID does not exist"
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.954905 4803 scope.go:117] "RemoveContainer" containerID="6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21"
Jan 27 23:18:30 crc kubenswrapper[4803]: E0127 23:18:30.955211 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21\": container with ID starting with 6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21 not found: ID does not exist" containerID="6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21"
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.955228 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21"} err="failed to get container status \"6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21\": rpc error: code = NotFound desc = could not find container \"6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21\": container with ID starting with 6fcd64a760687ad810c7fa144ac915e1d52ae976a48c78f88f846f74a63d2d21 not found: ID does not exist"
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.955240 4803 scope.go:117] "RemoveContainer" containerID="12a364d10cc201403a09889645ef7b2aefb4998bc12f09d9063e6359221aa2f3"
Jan 27 23:18:30 crc kubenswrapper[4803]: E0127 23:18:30.955820 4803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12a364d10cc201403a09889645ef7b2aefb4998bc12f09d9063e6359221aa2f3\": container with ID starting with 12a364d10cc201403a09889645ef7b2aefb4998bc12f09d9063e6359221aa2f3 not found: ID does not exist" containerID="12a364d10cc201403a09889645ef7b2aefb4998bc12f09d9063e6359221aa2f3"
Jan 27 23:18:30 crc kubenswrapper[4803]: I0127 23:18:30.955933 4803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a364d10cc201403a09889645ef7b2aefb4998bc12f09d9063e6359221aa2f3"} err="failed to get container status \"12a364d10cc201403a09889645ef7b2aefb4998bc12f09d9063e6359221aa2f3\": rpc error: code = NotFound desc = could not find container \"12a364d10cc201403a09889645ef7b2aefb4998bc12f09d9063e6359221aa2f3\": container with ID starting with 12a364d10cc201403a09889645ef7b2aefb4998bc12f09d9063e6359221aa2f3 not found: ID does not exist"
Jan 27 23:18:32 crc kubenswrapper[4803]: I0127 23:18:32.321654 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2684b25c-93fe-445e-b047-c3f7d7f93570" path="/var/lib/kubelet/pods/2684b25c-93fe-445e-b047-c3f7d7f93570/volumes"
Jan 27 23:18:33 crc kubenswrapper[4803]: I0127 23:18:33.738312 4803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:18:33 crc kubenswrapper[4803]: I0127 23:18:33.793723 4803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:18:34 crc kubenswrapper[4803]: I0127 23:18:34.465391 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8z4lq"]
Jan 27 23:18:34 crc kubenswrapper[4803]: I0127 23:18:34.889541 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8z4lq" podUID="f93741d1-b7f5-4416-a533-0613f9e5e533" containerName="registry-server" containerID="cri-o://2c51dcac5473be4cad1fe6fea58b53fa3bb565b75c6ecfeba6bd27398fc3e2b7" gracePeriod=2
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:35.901890 4803 generic.go:334] "Generic (PLEG): container finished" podID="f93741d1-b7f5-4416-a533-0613f9e5e533" containerID="2c51dcac5473be4cad1fe6fea58b53fa3bb565b75c6ecfeba6bd27398fc3e2b7" exitCode=0
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:35.901929 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8z4lq" event={"ID":"f93741d1-b7f5-4416-a533-0613f9e5e533","Type":"ContainerDied","Data":"2c51dcac5473be4cad1fe6fea58b53fa3bb565b75c6ecfeba6bd27398fc3e2b7"}
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.118439 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.288879 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-catalog-content\") pod \"f93741d1-b7f5-4416-a533-0613f9e5e533\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") "
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.289085 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-utilities\") pod \"f93741d1-b7f5-4416-a533-0613f9e5e533\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") "
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.289133 4803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k55bn\" (UniqueName: \"kubernetes.io/projected/f93741d1-b7f5-4416-a533-0613f9e5e533-kube-api-access-k55bn\") pod \"f93741d1-b7f5-4416-a533-0613f9e5e533\" (UID: \"f93741d1-b7f5-4416-a533-0613f9e5e533\") "
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.290901 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-utilities" (OuterVolumeSpecName: "utilities") pod "f93741d1-b7f5-4416-a533-0613f9e5e533" (UID: "f93741d1-b7f5-4416-a533-0613f9e5e533"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.299643 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f93741d1-b7f5-4416-a533-0613f9e5e533-kube-api-access-k55bn" (OuterVolumeSpecName: "kube-api-access-k55bn") pod "f93741d1-b7f5-4416-a533-0613f9e5e533" (UID: "f93741d1-b7f5-4416-a533-0613f9e5e533"). InnerVolumeSpecName "kube-api-access-k55bn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.393277 4803 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.393305 4803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k55bn\" (UniqueName: \"kubernetes.io/projected/f93741d1-b7f5-4416-a533-0613f9e5e533-kube-api-access-k55bn\") on node \"crc\" DevicePath \"\""
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.416363 4803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f93741d1-b7f5-4416-a533-0613f9e5e533" (UID: "f93741d1-b7f5-4416-a533-0613f9e5e533"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.495524 4803 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93741d1-b7f5-4416-a533-0613f9e5e533-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.922323 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8z4lq" event={"ID":"f93741d1-b7f5-4416-a533-0613f9e5e533","Type":"ContainerDied","Data":"87a9ddc0d5d79de2ef4e048c3e2e76a5ed5222f6646c80866406b5882cfc100d"}
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.922805 4803 scope.go:117] "RemoveContainer" containerID="2c51dcac5473be4cad1fe6fea58b53fa3bb565b75c6ecfeba6bd27398fc3e2b7"
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.922445 4803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8z4lq"
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.964934 4803 scope.go:117] "RemoveContainer" containerID="fac8b6a3556887e4dbbf116344eb273851be111839de004e048de65941b88f1b"
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.986397 4803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8z4lq"]
Jan 27 23:18:36 crc kubenswrapper[4803]: I0127 23:18:36.995391 4803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8z4lq"]
Jan 27 23:18:37 crc kubenswrapper[4803]: I0127 23:18:37.008241 4803 scope.go:117] "RemoveContainer" containerID="794e4fc77828241f36155f7ebe03b3c47a42ca33ba056095d5e5e1ce805edf7d"
Jan 27 23:18:38 crc kubenswrapper[4803]: I0127 23:18:38.324200 4803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f93741d1-b7f5-4416-a533-0613f9e5e533" path="/var/lib/kubelet/pods/f93741d1-b7f5-4416-a533-0613f9e5e533/volumes"
Jan 27 23:18:46 crc kubenswrapper[4803]: I0127 23:18:46.343624 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 23:18:46 crc kubenswrapper[4803]: I0127 23:18:46.344330 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 23:19:16 crc kubenswrapper[4803]: I0127 23:19:16.343782 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 23:19:16 crc kubenswrapper[4803]: I0127 23:19:16.344399 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 23:19:16 crc kubenswrapper[4803]: I0127 23:19:16.344443 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp"
Jan 27 23:19:16 crc kubenswrapper[4803]: I0127 23:19:16.345957 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1fd79617cecafa0e5a6b165ba06fdf6bea7229ea8721f4bf879baf531e446c8"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 23:19:16 crc kubenswrapper[4803]: I0127 23:19:16.346158 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://f1fd79617cecafa0e5a6b165ba06fdf6bea7229ea8721f4bf879baf531e446c8" gracePeriod=600
Jan 27 23:19:17 crc kubenswrapper[4803]: I0127 23:19:17.413241 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="f1fd79617cecafa0e5a6b165ba06fdf6bea7229ea8721f4bf879baf531e446c8" exitCode=0
Jan 27 23:19:17 crc kubenswrapper[4803]: I0127 23:19:17.413328 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"f1fd79617cecafa0e5a6b165ba06fdf6bea7229ea8721f4bf879baf531e446c8"}
Jan 27 23:19:17 crc kubenswrapper[4803]: I0127 23:19:17.413624 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerStarted","Data":"430de37fe6591235bc9be12b3793848838911a8213dd6159a10919eaa2b824ec"}
Jan 27 23:19:17 crc kubenswrapper[4803]: I0127 23:19:17.413647 4803 scope.go:117] "RemoveContainer" containerID="32ec2b5f27230b260aaf053e26445cb0d34ee85bbd1c97ba3eb6b8978d07e16d"
Jan 27 23:21:16 crc kubenswrapper[4803]: I0127 23:21:16.343654 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 23:21:16 crc kubenswrapper[4803]: I0127 23:21:16.344410 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 23:21:46 crc kubenswrapper[4803]: I0127 23:21:46.343665 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 23:21:46 crc kubenswrapper[4803]: I0127 23:21:46.344578 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 23:22:16 crc kubenswrapper[4803]: I0127 23:22:16.343544 4803 patch_prober.go:28] interesting pod/machine-config-daemon-d56gp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 23:22:16 crc kubenswrapper[4803]: I0127 23:22:16.344190 4803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 23:22:16 crc kubenswrapper[4803]: I0127 23:22:16.344243 4803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d56gp"
Jan 27 23:22:16 crc kubenswrapper[4803]: I0127 23:22:16.345424 4803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"430de37fe6591235bc9be12b3793848838911a8213dd6159a10919eaa2b824ec"} pod="openshift-machine-config-operator/machine-config-daemon-d56gp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 23:22:16 crc kubenswrapper[4803]: I0127 23:22:16.345503 4803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerName="machine-config-daemon" containerID="cri-o://430de37fe6591235bc9be12b3793848838911a8213dd6159a10919eaa2b824ec" gracePeriod=600
Jan 27 23:22:16 crc kubenswrapper[4803]: E0127 23:22:16.477608 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"
Jan 27 23:22:16 crc kubenswrapper[4803]: I0127 23:22:16.764480 4803 generic.go:334] "Generic (PLEG): container finished" podID="aeb23e3d-ee70-4f1d-85c0-005373cca336" containerID="430de37fe6591235bc9be12b3793848838911a8213dd6159a10919eaa2b824ec" exitCode=0
Jan 27 23:22:16 crc kubenswrapper[4803]: I0127 23:22:16.764801 4803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" event={"ID":"aeb23e3d-ee70-4f1d-85c0-005373cca336","Type":"ContainerDied","Data":"430de37fe6591235bc9be12b3793848838911a8213dd6159a10919eaa2b824ec"}
Jan 27 23:22:16 crc kubenswrapper[4803]: I0127 23:22:16.765124 4803 scope.go:117] "RemoveContainer" containerID="f1fd79617cecafa0e5a6b165ba06fdf6bea7229ea8721f4bf879baf531e446c8"
Jan 27 23:22:16 crc kubenswrapper[4803]: I0127 23:22:16.765975 4803 scope.go:117] "RemoveContainer" containerID="430de37fe6591235bc9be12b3793848838911a8213dd6159a10919eaa2b824ec"
Jan 27 23:22:16 crc kubenswrapper[4803]: E0127 23:22:16.766399 4803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d56gp_openshift-machine-config-operator(aeb23e3d-ee70-4f1d-85c0-005373cca336)\"" pod="openshift-machine-config-operator/machine-config-daemon-d56gp" podUID="aeb23e3d-ee70-4f1d-85c0-005373cca336"